  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
161

Feature Set Evaluation For A Generic Missile Detection System

Avan, Selcuk Kazim 01 February 2007 (has links) (PDF)
A Missile Detection System (MDS) is one of the main components of a self-protection system developed for airborne platforms against the threat of guided missiles. Requirements such as time-critical operation and high classification accuracy make the pattern recognition problem of an MDS a hard task. The problem can be divided into two main parts: 'Feature Set Evaluation' (FSE) and 'Classifier' design. The main goal of feature set evaluation is to apply a dimensionality reduction process to the input data set without degrading the resulting classification performance. In this thesis study, FSE approaches are investigated for the pattern recognition problem of a generic MDS. First, synthetic data generation is carried out in a software environment, employing generic models and assumptions in order to reflect the nature of a realistic problem environment. Then, the data sets are evaluated in order to establish a baseline for further feature set evaluation approaches. Next, theoretical background covering the concepts of class separability, feature selection and feature extraction is given. Several widely used methods are assessed for their suitability to the problem, with justifications based on the data set characteristics. Building on this background, software implementations of several feature set evaluation techniques are developed, and simulations are carried out to perform dimensionality reduction. To evaluate the resulting data sets in terms of classification performance, a classifier is also implemented in software. The resulting classification performances of the applied approaches are compared and evaluated.
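The FSE workflow the abstract outlines — reduce the dimensionality of the input data, then verify that classification performance survives the reduction — can be sketched generically. The sketch below uses PCA and a nearest-centroid classifier on synthetic data purely as stand-ins; the thesis's actual models, features and classifier are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data: 20 raw features, only 3 of them informative.
n = 200
X = rng.normal(size=(n, 20))
y = (rng.random(n) < 0.5).astype(int)
X[y == 1, :3] += 2.0          # class separation lives in the first 3 features

# PCA via SVD: project onto the top-k principal components.
k = 3
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:k].T             # reduced feature set, shape (n, k)

# Nearest-centroid classifier on the reduced features.
c0, c1 = Z[y == 0].mean(axis=0), Z[y == 1].mean(axis=0)
pred = (np.linalg.norm(Z - c1, axis=1) < np.linalg.norm(Z - c0, axis=1)).astype(int)
accuracy = (pred == y).mean()
print(Z.shape, round(accuracy, 2))
```

If the informative directions survive the projection, accuracy on the 3-dimensional data stays close to what the full 20-dimensional data would give — the "do not disturb classification performance" criterion the abstract describes.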
162

An XML-Based Feature Modeling Language

Nabdel, Leili 01 October 2011 (has links) (PDF)
Feature modeling is a common way of representing commonality and variability in software product lines. Alternative notations for representing feature models have been reported in the literature; compared to graphical notations, text-based notations are more amenable to automated processing and tool interoperability. This study presents an XML-based feature modeling language to represent extended feature models that can include complex relationships involving attributes. We first provide a context-free grammar for extended feature model definitions that include such complex relationships. We then build the XML Schema Definitions and present a number of XML instances that conform to the defined schema. In addition, we discuss a process for validating the XML instances against the defined schema, which also includes additional tasks such as checking the well-formedness of the XML instances.
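The kind of text-based feature model the abstract describes can be illustrated with a toy XML instance. The element and attribute names below are invented for illustration and are not the schema defined in the thesis; well-formedness checking falls out of parsing, while full validation would use XSD tooling.

```python
import xml.etree.ElementTree as ET

# A toy extended feature model instance (hypothetical vocabulary).
doc = """
<featureModel name="phone">
  <feature name="calls" mandatory="true"/>
  <feature name="camera" mandatory="false">
    <attribute name="megapixels" value="12"/>
  </feature>
  <constraint>camera implies calls</constraint>
</featureModel>
"""

# Parsing itself checks well-formedness; a light structural check
# (every feature carries a name) stands in for schema validation.
root = ET.fromstring(doc)
features = root.findall("feature")
assert all("name" in f.attrib for f in features)
print(root.tag, len(features))
```

The `<constraint>` element hints at the "complex relationships involving attributes" that motivate an extended (rather than basic) feature model.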
163

Požymių diagramų ir UML klasių diagramų integravimo tyrimas / Research on Feature Diagram and UML Class Diagram Integration

Žaliaduonis, Paulius 26 August 2010 (has links)
Developing software systems for many customers whose requirements differ is a complex process and requires describing the possible variants of the system. Feature models of the system under development are used to describe this variability. Feature modelling is an important variability-description method: a feature variability model describes a set of software systems, also called a software product line. A software product line is a family of similar programs that share common attributes. To characterize a product line more precisely, the systems' attributes and their interrelations are identified and represented in feature diagrams. A feature is a distinctive, characteristic attribute of a system that captures its visible attributes without going into a detailed system description. Fast, high-quality modelling of system variability requires a good tool; a feature diagram modelling tool serves this purpose, since the resulting feature models are informative and can easily convey the system's variability information. However, a feature diagram of a software system lacks the technical information needed to build the software; that information is stored in UML models. A program's UML model can be extended with variability information by supplementing it with information from the system's feature model. During the master's project, a tool (FD2) was developed that implements the linking of feature diagrams with UML class diagrams. The thesis investigates... [see full text] / Feature modeling is an important approach for dealing with system variability at a higher abstraction level. Variability models define the variability of a software product line. Unfortunately, feature modeling is not integrated into a modeling framework like the Unified Modeling Language (UML). To use it in conjunction with UML, it is important to integrate feature modeling into UML.
This thesis describes how feature variability models can be linked with existing UML models and how this is done in the feature modeling tool FD2. The feature modeling tool is described and a complete example is provided. Chapter 2 discusses the integration of feature models with UML models. Chapter 3 describes the implementation of the FD2 tool. Chapter 4 discusses the advantages and disadvantages of the FD2 tool. Chapter 5 provides examples and discusses their results. In conclusion, this thesis proposes integrating feature modeling with UML modeling, discusses the program developed during the master's project, provides two examples and discusses their results, and points out some issues requiring further work.
164

A Feature-Oriented Modelling Language and a Feature-Interaction Taxonomy for Product-Line Requirements

Shaker, Pourya 22 November 2013 (has links)
Many organizations specialize in the development of families of software systems, called software product lines (SPLs), for one or more domains (e.g., automotive, telephony, health care). SPLs are commonly developed as a shared set of assets representing the common and variable aspects of an SPL, and individual products are constructed by assembling the right combinations of assets. The feature-oriented software development (FOSD) paradigm advocates the use of system features as the primary unit of commonality and variability among the products of an SPL. A feature represents a coherent and identifiable bundle of system functionality, such as call waiting in telephony and cruise control in an automobile. Furthermore, FOSD aims at feature-oriented artifacts (FOAs); that is, software-development artifacts that explicate features, so that a clear mapping is established between a feature and its representation in different artifacts. The thesis first identifies the problem of developing a suitable language for expressing feature-oriented models of the functional requirements of an SPL, and then presents the feature-oriented requirements modelling language (FORML) as a solution to this problem. FORML's notation is based on standard software-engineering notations (e.g., UML class and state-machine models, feature models) to ease adoption by practitioners, and has a precise syntax and semantics to enable analysis. The novelty of FORML is in adding feature-orientation to state-of-the-art requirements modelling approaches (e.g., KAOS), and in the systematic treatment of modelling evolutions of an SPL via enhancements to existing features. An existing feature can be enhanced by extending or modifying its requirements. Enhancements that modify a feature's requirements are called intended feature interactions. 
For example, the call waiting feature in telephony intentionally overrides the basic call service feature's treatment of incoming calls when the subscriber is already involved in a call. FORML prescribes different constructs for specifying different types of enhancements in state-machine models of requirements. Furthermore, unlike some prominent approaches (e.g., AHEAD, DFC), FORML's constructs for modelling intended feature interactions do not depend on the order in which features are composed; this can lead to savings in analysis costs, since only one composition order, rather than (possibly) several, needs to be analyzed. A well-known challenge in FOSD is managing feature interactions, which, informally defined, are ways in which different features can influence one another in defining the overall properties and behaviours of their combination. Some feature interactions are intended, as described above, while others are unintended: for example, the cruise control and anti-lock braking system features of an automobile may have incompatible effects on the automobile's acceleration, which would make their combination inconsistent. Unintended feature interactions should be detected and resolved. To detect unintended interactions in models of feature behaviour, we must first define a taxonomy of feature interactions for the modelling language: that is, we must understand the different ways that feature interactions can manifest among features expressed in the language. The thesis presents a taxonomy of feature interactions for FORML that is an adaptation of existing taxonomies for operational models of feature behaviour.
The novelty of the proposed taxonomy is that it presents a definition of behaviour modification that generalizes special cases found in the literature; and it enables feature-interaction analyses that report only unintended interactions, by excluding interactions caused by FORML's constructs for modelling intended feature interactions.
165

Probabilistic Shape Parsing and Action Recognition Through Binary Spatio-Temporal Feature Description

Whiten, Christopher J. 09 April 2013 (has links)
In this thesis, contributions are presented in the areas of shape parsing for view-based object recognition and spatio-temporal feature description for action recognition. A probabilistic model for parsing shapes into several distinguishable parts for accurate shape recognition is presented; this approach is based on robust geometric features that permit high recognition accuracy. As the second contribution, a binary spatio-temporal feature descriptor is presented. Recent work shows that binary spatial feature descriptors increase the efficiency of object recognition while retaining performance comparable to state-of-the-art descriptors. An extension of these approaches to action recognition is presented, yielding large gains in efficiency due to the computational advantage of computing a bag-of-words representation with the Hamming distance. A scene's motion and appearance are encoded with a short binary string, and exploiting the binary makeup of this descriptor greatly increases efficiency while retaining competitive recognition performance.
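The efficiency argument rests on Hamming distance being cheap on binary strings: it is the popcount of an XOR. A minimal sketch of assigning binary descriptors to a bag-of-words histogram (random data standing in for real descriptors and a trained vocabulary; not the thesis's descriptor):

```python
import numpy as np

# Binary descriptors packed as bytes: 256 bits = 32 uint8 values each.
rng = np.random.default_rng(1)
vocab = rng.integers(0, 256, size=(100, 32), dtype=np.uint8)  # visual words
desc  = rng.integers(0, 256, size=(500, 32), dtype=np.uint8)  # scene descriptors

# Hamming distance = popcount of XOR; unpackbits makes the popcount explicit.
xor = desc[:, None, :] ^ vocab[None, :, :]        # (500, 100, 32) byte XORs
dist = np.unpackbits(xor, axis=2).sum(axis=2)     # (500, 100) bit counts

# Bag-of-words histogram: each descriptor votes for its nearest word.
words = dist.argmin(axis=1)
bow = np.bincount(words, minlength=len(vocab))
print(bow.sum())
```

On hardware with a native popcount instruction this comparison is a handful of cycles per word, which is where the reported efficiency gain over floating-point descriptor distances comes from.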
166

Feature Analysis of Functional MRI Data for Mapping Epileptic Networks

Burrell, Lauren S. 17 November 2008 (has links)
This research focused on the development of a methodology for analyzing functional magnetic resonance imaging (fMRI) data collected from patients with epilepsy in order to map epileptic networks. Epilepsy, a chronic neurological disorder characterized by recurrent, unprovoked seizures, affects up to 1% of the world's population. Antiepileptic drug therapies either do not successfully control seizures or have unacceptable side effects in over 30% of patients. Approximately one-third of patients whose seizures cannot be controlled by medication are candidates for surgical removal of the affected area of the brain, potentially rendering them seizure free. Accurate localization of the epileptogenic focus, i.e., the area of seizure onset, is critical for the best surgical outcome. The main objective of the research was to develop a set of fMRI data features that could be used to distinguish between normal brain tissue and the epileptic focus. To determine the best combination of features from various domains for mapping the focus, genetic programming and several feature selection methods were employed. These composite features and feature sets were subsequently used to train a classifier capable of discriminating between the two classes of voxels. The classifier was then applied to a separate testing set in order to generate maps showing brain voxels labeled as either normal or epileptogenic based on the best feature or set of features. It should be noted that although this work focuses on the application of fMRI analysis to epilepsy data, similar techniques could be used when studying brain activations due to other sources. In addition to investigating in vivo data collected from temporal lobe epilepsy patients with uncertain epileptic foci, phantom (simulated) data were created and processed to provide quantitative measures of the efficacy of the techniques.
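The feature-ranking step of such a pipeline can be illustrated with a simple class-separability score. The sketch below ranks synthetic voxel features by a Fisher-style score (between-class mean gap over within-class spread); the thesis's genetic programming, its specific feature domains, and the in-vivo fMRI data are not reproduced here.

```python
import numpy as np

def fisher_scores(X, y):
    """Fisher-style separability score per feature:
    squared between-class mean gap over pooled within-class variance."""
    a, b = X[y == 0], X[y == 1]
    num = (a.mean(axis=0) - b.mean(axis=0)) ** 2
    den = a.var(axis=0) + b.var(axis=0) + 1e-12
    return num / den

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 50))           # 300 voxels, 50 candidate features
y = (rng.random(300) < 0.5).astype(int)  # synthetic focus vs. normal labels
X[y == 1, 0] += 3.0                      # feature 0 carries the signal

scores = fisher_scores(X, y)
top = np.argsort(scores)[::-1][:5]       # keep the 5 best-ranked features
print(top[0])
```

A classifier trained only on the top-ranked features then plays the role of the voxel labeller described in the abstract, and the phantom-data idea corresponds to knowing which feature truly carries the signal, as the synthetic shift does here.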
167

Public Health Surveillance in High-Dimensions with Supervised Learning

January 2010 (has links)
abstract: Public health surveillance is a special case of the general problem where counts (or rates) of events are monitored for changes. Modern data complements event counts with many additional measurements (such as geographic, demographic, and others) that comprise high-dimensional covariates. This leads to an important challenge: detecting a change that occurs only within a region, initially unspecified, defined by these covariates. Current methods are typically limited to spatial and/or temporal covariate information and often fail to use all the information available in modern data, which can be paramount in unveiling these subtle changes. Additional complexities associated with modern health data that are often not accounted for by traditional methods include covariates of mixed type, missing values, and high-order interactions among covariates. This work proposes transforming public health surveillance into a supervised learning problem, so that an appropriate learner can inherently address all the complexities described previously. At the same time, quantitative measures from the learner can be used to define signal criteria to detect changes in rates of events. A Feature Selection (FS) method is used to identify covariates that contribute to a model and to generate a signal, and a measure of statistical significance is included to control false alarms. An alternative Percentile method identifies the specific cases that lead to changes using class probability estimates from tree-based ensembles; this second method is intended to be less computationally intensive and significantly simpler to implement. Finally, a third method, labeled Rule-Based Feature Value Selection (RBFVS), is proposed for identifying the specific regions in high-dimensional space where the changes are occurring. Results on simulated examples are used to compare the FS method and the Percentile method. Note that this work emphasizes the application of the proposed methods to public health surveillance.
Nonetheless, these methods can easily be extended to a variety of applications where counts (or rates) of events are monitored for changes. Such problems commonly occur in domains such as manufacturing, economics, environmental systems, engineering, as well as in public health. / Dissertation/Thesis / Ph.D. Industrial Engineering 2010
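The percentile idea — score each case with a class-probability estimate and flag cases whose score exceeds a high percentile of the baseline period — can be sketched in a few lines. The scores below are random placeholders, not the tree-ensemble probability estimates the dissertation uses, and the 99th-percentile cutoff is only one way to set the false-alarm rate.

```python
import numpy as np

rng = np.random.default_rng(3)

# Placeholder per-case event-probability scores for a baseline period
# and a current period; a real system would take these from a learner.
baseline = rng.beta(2, 8, size=1000)
current = rng.beta(2, 8, size=200)
current[:10] = np.minimum(current[:10] + 0.65, 1.0)  # a small elevated-risk cluster

# Signal criterion: exceed the baseline's 99th percentile (~1% false alarms).
threshold = np.percentile(baseline, 99)
flagged = np.flatnonzero(current > threshold)
print(len(flagged))
```

The flagged indices are the "specific cases that lead to changes"; examining their covariate values is the step the RBFVS method then systematizes into rules over regions of the covariate space.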
168

Identificação Visual de Caixas de Medicamentos Usando Features Correspondentes / Visual Identification of Medicine Boxes Using Matching Features

Benjamim, Xiankleber Cavalcante 30 July 2012 (has links)
This work uses feature-based computer vision algorithms to identify medicine boxes for the visually impaired. The system is intended for people whose vision is compromised by disease, which hinders identification of the correct medicine to be taken. We use the camera available in several popular devices, such as computers, televisions and phones, to identify the correct medicine box from the image, and audio to present the user with information about the medication, such as dosage, indications and contraindications. We employ an object detection model based on feature-matching algorithms to identify the features on the medicine boxes and play the audio at the moment those features are detected. Experiments carried out with 15 people show that 93% consider the system useful or very useful for identifying medicines by their boxes. This technology can therefore help many people with visual impairments take the right medicine at the time previously indicated by the physician.
169

Characterization of Coronary Atherosclerotic Plaques by Dual Energy Computed Tomography

January 2013 (has links)
abstract: Coronary heart disease (CHD) is the most prevalent cause of death worldwide. Atherosclerosis which is the condition of plaque buildup on the inside of the coronary artery wall is the main cause of CHD. Rupture of unstable atherosclerotic coronary plaque is known to be the cause of acute coronary syndrome. The composition of plaque is important for detection of plaque vulnerability. Due to prognostic importance of early stage identification, non-invasive assessment of plaque characterization is necessary. Computed tomography (CT) has emerged as a non-invasive alternative to coronary angiography. Recently, dual energy CT (DECT) coronary angiography has been performed clinically. DECT scanners use two different X-ray energies in order to determine the energy dependency of tissue attenuation values for each voxel. They generate virtual monochromatic energy images, as well as material basis pair images. The characterization of plaque components by DECT is still an active research topic since overlap between the CT attenuations measured in plaque components and contrast material shows that the single mean density might not be an appropriate measure for characterization. This dissertation proposes feature extraction, feature selection and learning strategies for supervised characterization of coronary atherosclerotic plaques. In my first study, I proposed an approach for calcium quantification in contrast-enhanced examinations of the coronary arteries, potentially eliminating the need for an extra non-contrast X-ray acquisition. The ambiguity of separation of calcium from contrast material was solved by using virtual non-contrast images. Additional attenuation data provided by DECT provides valuable information for separation of lipid from fibrous plaque since the change of their attenuation as the energy level changes is different. My second study proposed these as the input to supervised learners for a more precise classification of lipid and fibrous plaques. 
My last study aimed at automatic segmentation of coronary arteries characterizing plaque components and lumen on contrast enhanced monochromatic X-ray images. This required extraction of features from regions of interests. This study proposed feature extraction strategies and selection of important ones. The results show that supervised learning on the proposed features provides promising results for automatic characterization of coronary atherosclerotic plaques by DECT. / Dissertation/Thesis / Ph.D. Bioengineering 2013
170

Seleção supervisionada de características por ranking para processar consultas por similaridade em imagens médicas / Supervised feature selection by ranking to process similarity queries in medical images

Gabriel Efrain Humpire Mamani 05 December 2012 (has links)
Obtaining a representative and succinct description of medical images is a challenge that has been pursued by researchers in the area of medical image processing to support Computer-Aided Diagnosis (CAD). CAD systems use feature extraction algorithms to represent images, so different extractors can be evaluated. However, medical images contain important internal structures that allow identifying tissues, organs, deformations and diseases. Usually a large number of features is extracted from the images; what appears to be beneficial can actually impair the indexing and retrieval of the images, revealing problems such as the curse of dimensionality. Thus, it is necessary to select the most relevant features to make the process more efficient and effective. This dissertation developed a supervised feature selection method called FSCoMS (Feature Selection based on Compactness Measure from Scatterplots) to obtain a ranking of features suitable for medical image analysis. FSCoMS generates shorter, more efficient feature vectors for answering similarity queries. Additionally, the k-Gabor feature extractor was developed, which extracts features per gray level, highlighting internal structures of medical images. The experiments were performed on four real-world medical image datasets. Results show that k-Gabor boosts retrieval performance, whereas FSCoMS reduces feature redundancy, producing a more compact feature vector than conventional feature selection methods while achieving higher image retrieval performance.
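For readers unfamiliar with Gabor features, a filter-bank extractor can be sketched with the standard real-valued Gabor kernel (a Gaussian-windowed cosine). This is the generic textbook form applied to a random test image, not the thesis's per-gray-level k-Gabor variant; the parameter values are arbitrary.

```python
import numpy as np

def gabor_kernel(size=15, sigma=3.0, theta=0.0, lam=6.0):
    """Real-valued Gabor kernel: Gaussian envelope times an oriented cosine."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def response(img, k):
    # 'valid' 2-D correlation via a sliding-window accumulation (numpy only).
    h, w = k.shape
    out = np.zeros((img.shape[0] - h + 1, img.shape[1] - w + 1))
    for i in range(h):
        for j in range(w):
            out += k[i, j] * img[i:i + out.shape[0], j:j + out.shape[1]]
    return out

rng = np.random.default_rng(4)
img = rng.random((64, 64))                       # stand-in for a medical image
bank = [gabor_kernel(theta=t) for t in (0, np.pi/4, np.pi/2, 3*np.pi/4)]

# One feature per orientation: mean absolute filter response.
features = np.array([np.abs(response(img, k)).mean() for k in bank])
print(features.shape)
```

Computing such responses separately per gray level, as the k-Gabor extractor does, multiplies the feature count by the number of levels, which is exactly the kind of growth that then motivates a ranking-based selector like FSCoMS.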
