31

Arranging simple neural networks to solve complex classification problems

Ghaderi, Reza January 2000
In "decomposition/reconstruction" strategy, we can solve a complex problem by 1) decomposing the problem into simpler sub-problems, 2) solving sub-problems with simpler systems (sub-systems) and 3) combining the results of sub-systems to solve the original problem. In a classification task we may have "label complexity" which is due to high number of possible classes, "function complexity" which means the existence of complex input-output relationship, and "input complexity" which is due to requirement of a huge feature set to represent patterns. Error Correcting Output Code (ECOC) is a technique to reduce the label complexity in which a multi-class problem will be decomposed into a set of binary sub-problems, based oil the sequence of "0"s and "1"s of the columns of a decomposition (code) matrix. Then a given pattern can be assigned to the class having minimum distance to the results of sub-problems. The lack of knowledge about the relationship between distance measurement and class score (like posterior probabilities) has caused some essential shortcomings to answering questions about "source of effectiveness", "error analysis", " code selecting ", and " alternative reconstruction methods" in previous works. Proposing a theoretical framework in this thesis to specify this relationship, our main contributions in this subject are to: 1) explain the theoretical reasons for code selection conditions 2) suggest new conditions for code generation (equidistance code)which minimise reconstruction error and address a search technique for code selection 3) provide an analysis to show the effect of different kinds of error on final performance 4) suggest a novel combining method to reduce the effect of code word selection in non-optimum codes 5) suggest novel reconstruction frameworks to combine the component outputs. Some experiments on artificial and real benchmarks demonstrate significant improvement achieved in multi-class problems when simple feed forward neural networks are arranged based on suggested framework To solve the problem of function complexity we considered AdaBoost, as a technique which can be fused with ECOC to overcome its shortcoming for binary problems. And to handle the problems of huge feature sets, we have suggested a multi-net structure with local back propagation. To demonstrate these improvements on realistic problems a face recognition application is considered. Key words: decomposition/ reconstruction, reconstruction error, error correcting output codes, bias-variance decomposition.
32

Evaluating loss minimization in multi-label classification via stochastic simulation using beta distribution

MELLO, L. H. S. 20 May 2016
The objective of this work is to assess the effectiveness and efficiency of algorithms for solving the loss minimization problem in Multi-Label Classification (MLC). We first prove that a specific case of loss minimization in MLC is NP-complete for the loss functions Coverage and Search Length, and therefore no efficient algorithm for solving such problems exists unless P=NP. Furthermore, we show a novel approach for evaluating multi-label algorithms that has the advantage of not being limited to some chosen base learners, such as K-Nearest Neighbors and Support Vector Machines, by simulating the distribution of labels according to multiple Beta distributions.
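The dissertation's exact simulation protocol is not given in this abstract. The following is a rough sketch of the stated idea under our own assumptions: per-label relevance probabilities drawn from label-specific Beta distributions, with the Coverage loss computed on the resulting scores. All parameter values are arbitrary placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_labels(n_instances, n_labels, alphas, betas):
    """Draw a per-label relevance probability for every instance from a
    label-specific Beta(alpha, beta), then sample true labels from it;
    the probabilities double as the simulated classifier scores."""
    probs = rng.beta(alphas, betas, size=(n_instances, n_labels))
    y_true = rng.binomial(1, probs)
    return y_true, probs

def coverage(y_true, scores):
    """Coverage loss: average depth in the score-sorted label ranking
    needed to cover all relevant labels of an instance (0-indexed)."""
    order = np.argsort(-scores, axis=1)             # best-scored label first
    ranks = np.empty_like(order)
    rows = np.arange(order.shape[0])[:, None]
    ranks[rows, order] = np.arange(order.shape[1])  # rank of each label
    return float(np.mean([ranks[i, y_true[i] == 1].max()
                          for i in range(len(y_true)) if y_true[i].any()]))

# Arbitrary Beta parameters for a 5-label problem:
y, s = simulate_labels(1000, 5, alphas=np.array([2., 2., 1., 5., 3.]),
                       betas=np.array([5., 2., 8., 1., 3.]))
print(coverage(y, s))
```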
33

Meta-aprendizado para análise de desempenho de métodos de classificação multi-label / Meta-learning for performance analysis of multi-label classification methods

PINTO, Eduardo Ribas 31 January 2009
In recent years, many applications have used supervised machine learning algorithms to solve classification problems across several domains. However, many of these applications are restricted to single-label algorithms, i.e., algorithms that assign only one class to a given instance. Such applications become inadequate when the same instance, in the real world, belongs to more than one class simultaneously. This is known in the literature as the multi-label classification problem. Currently, there is a variety of strategies for solving multi-label problems. Some of them belong to a group called problem transformation methods, so named because this kind of strategy divides a multi-label classification problem into several single-label problems in order to reduce its complexity. Others handle multi-label datasets directly and are known as algorithm adaptation methods. Given the large number of existing multi-label methods, it is quite difficult to choose the most suitable one for a given domain. In view of this, this dissertation pursued two goals: a comparative study of widely used problem transformation methods, and the application of two meta-learning strategies to multi-label classification in order to predict, based on the descriptive characteristics of a dataset, which algorithm is most likely to outperform the others. The meta-learning approaches used in our work were derived from correlation analysis and rule mining techniques. The use of meta-learning in the context of multi-label classification is original and produced satisfactory results in our experiments, which suggests it can serve as an initial guide for future research.
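The abstract does not name the specific problem transformation methods compared. Binary relevance is the canonical example of the family it describes (one independent single-label problem per label), so a minimal sketch is given here for orientation; the base learner is an arbitrary placeholder:

```python
import numpy as np
from sklearn.base import clone
from sklearn.linear_model import LogisticRegression

class BinaryRelevance:
    """Problem-transformation sketch: one independent binary
    (single-label) classifier per label column."""

    def __init__(self, base_estimator=None):
        # Placeholder base learner; any sklearn classifier works.
        self.base_estimator = base_estimator or LogisticRegression(max_iter=1000)

    def fit(self, X, Y):
        # Y: (n_samples, n_labels) 0/1 indicator matrix. Assumes every
        # label column contains both positive and negative examples.
        self.models_ = [clone(self.base_estimator).fit(X, Y[:, j])
                        for j in range(Y.shape[1])]
        return self

    def predict(self, X):
        # Recombine the per-label predictions into a label matrix.
        return np.column_stack([m.predict(X) for m in self.models_])
```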
34

Marcas próprias de supermercado: um estudo com consumidoras na cidade de São Paulo / Supermarket private labels: a study with consumers in São Paulo city - Brazil

Marcelo Felippe Figueira Junior 27 August 2008
The objective of this work is to investigate the consumption of private label products in supermarkets, focusing on consumers' choices between private label and leading brand products. It aims to capture consumers' perceptions of the prices and quality of products sold under retailers' brands, to analyze how frequently these products are consumed, and to investigate what price gap consumers are willing to accept between private labels and leading brands, using the Focus Group research method. The decision to study the choice of private labels by supermarket consumers stems from the need to understand this specific type of consumption, identifying the perception, behavior, and decision process of the supermarket clientele in order to bring theoretical contributions to the Retail Management body of knowledge.
35

Interferometric imaging for high sensitivity multiplexed molecular measurements

Marn, Allison M. 25 September 2021
The diagnostic and pharmaceutical industries rely on tools for characterizing, discovering, and developing bio-molecular interactions. Diagnostic assays require high-affinity capture probes and binding specificity for accurate detection of biomarkers. Selection of drug candidates depends on the drug residency time and duration of drug action. Further, biologic drugs can induce anti-drug antibodies, which require characterization to determine the impact on drug safety and efficacy. Label-free biosensors are an attractive solution for analyzing these and other bio-molecular interactions because they provide information based on the characteristics of the molecules themselves, without disturbing the native biological systems by labeling. While label-free biosensors can analyze a broad range of analytes, small molecular weight analytes (molecular weight < 1 kDa) are the most challenging. Affinity measurements for small molecular weight targets require high sensitivity and long-term signal stability. Additional difficulties arise from differences in liquid refractive index caused by temperature, composition, or matrix effects of sensor surfaces. Some solutions utilize strong solvents to increase the solubility of small molecules, which also alter the refractive index. Moreover, diagnostics require affinity measurements in relevant solutions of various refractive indices. When a refractive index difference exists between the analyte solution and the wash buffer, a background signal is generated, referred to as the bulk effect, which obscures the small signal due to surface binding amid large fluctuations caused by variations in the refractive index of the solutions. The signal generated by low molecular weight analytes is small, and conventional wisdom tends toward signal amplification or resonance for detection of these small signals. With this approach, Surface Plasmon Resonance (SPR) has become the gold standard in affinity measurement technologies. SPR is an expensive and complex technology that is highly susceptible to the bulk effect. SPR uses a reference channel to correct for the bulk effect in post-processing, which requires high precision and sophisticated temperature control, further increasing cost and complexity. Additionally, multiplexing is desirable as it allows simultaneous measurements of multiple ligands; however, multiplexing is only possible in the imaging modality of SPR, which has lower sensitivity and difficulty with referencing. The Interferometric Reflectance Imaging Sensor (IRIS) is a low-cost, optical, label-free bio-molecular interaction analysis technology capable of providing precise binding affinity measurements; however, limitations in sensitivity and usability have previously prevented its widespread adoption. Overcoming these limitations requires automation, compact and easy-to-use instrumentation, and increased sensitivity. Here, we explore methods for improved sensitivity and usability. We achieve noise reduction and elimination of solution artifacts (the bulk effect) through engineered illumination uniformity and temporal and spatial image processing. To validate these methods, we experimentally analyze small-molecule interactions to demonstrate highly sensitive kinetic binding measurements, independent of solution refractive index.
36

Virtuální privátní sítě na bázi technologie MPLS / MPLS based virtual private networks

Daněk, Michal January 2017
This master's thesis deals with the architecture of networks based on Multiprotocol Label Switching (MPLS) technology. The work also describes the use of this technology for point-to-point or multipoint connections at the network or data link layer. The practical part focuses on the design of a laboratory task aimed at configuring a Virtual Private LAN Service (VPLS). This technology emulates a multipoint connection at the data link layer.
37

Off-Label and Unlicensed Medication Use and Associated Adverse Drug Events in a Pediatric Emergency Department

Phan, Hanna, Leder, Marc, Fishley, Matthew, Moeller, Matthew, Nahata, Milap 01 June 2010
Objectives: The study objectives were to (1) determine the types and frequency of off-label (OL) or unlicensed (UL) medications used in a pediatric emergency department (PED) and before admission, (2) describe OL/UL-associated adverse drug events (ADEs) resulting in admission to the PED and those occurring during patient care in the PED, and (3) determine the outcomes of these ADEs. Methods: Medical records of patients 18 years or younger admitted to the PED over a 5-month period were reviewed. Off-label/UL use of medications was determined based on Food and Drug Administration-approved labeling. The Adverse Drug Reaction Probability Scale was used to determine ADE causality. Data were analyzed using descriptive statistics. Results: A total of 2191 patients with 6675 medication orders were evaluated. Overall, 26.2% (n = 1712) of medication orders were considered OL/UL use; 70.5% (n = 1208) of these medications were ordered as part of treatment in the PED, and the remaining 29.5% (n = 504) were home medications taken before the PED evaluation. Inhaled bronchodilators (30.4%), antimicrobials (14.8%), and antihistamines/antiemetics (9.1%) were the most common OL/UL medication classes. The frequency of ADEs was twofold greater with licensed medication use than with OL/UL use. The reported overall rate of ADEs was 0.6% (n = 40). Of these 40 ADEs, 5 resulted from the use of an OL/UL medication: 3 from home medication use and 2 from PED-prescribed medications. Conclusions: The frequency of reported ADEs associated with OL/UL medications was lower than the frequency of ADEs from licensed medication use, with an overall ADE frequency of less than 1%.
38

Effects of the Diagnostic Label "ADHD" on Peer Judgment

Toone, Jared 01 May 2006
Diagnostic labels are frequently used with children exhibiting symptoms of learning and behavioral disorders. The effect that such labels have on the labeled children as well as their peers is not completely understood. In the present study, the effects of the label "ADHD" on peer acceptance were examined. Fourth- and fifth-grade boys and girls viewed a video of a peer listening to teacher instruction and working on a worksheet. For half of the participants, the child in the video was labeled as having ADHD, while the other participants were told nothing about the child. After viewing the video, the children responded to a questionnaire assessing the likelihood that they would befriend the peer in the video. An analysis of variance revealed that the label resulted in significantly lower friendship ratings. Gender of the participant was not found to impact peer ratings. These results indicate that parents, professionals, and children need to be educated about the effects that labels may have and that labels need to be used with caution. Labeled children may also benefit from counseling about how others may respond to their label.
39

Optimal Semantic Labeling of Social Network Clusters

Peng, Shuyue 13 October 2014
No description available.
40

Benchmarking Methods For Predicting Phenotype Gene Associations

Tyagi, Tanya 16 September 2020
Assigning human genes to diseases and related phenotypes is an important topic in modern genomics. Human Phenotype Ontology (HPO) is a standardized vocabulary of phenotypic abnormalities that occur in human diseases. Computational methods such as label propagation and supervised learning address challenges posed by traditional approaches such as manual curation to link genes to phenotypes in the HPO. It is only in recent years that computational methods have been applied in a network-based approach for predicting genes to disease-related phenotypes. In this thesis, we present an extensive benchmarking of various computational methods for the task of network-based gene classification. These methods are evaluated on multiple protein interaction networks and feature representations. We empirically evaluate the performance of multiple prediction tasks using two evaluation experiments: cross-fold validation and the more stringent temporal holdout. We demonstrate that all of the prediction methods considered in our benchmarking analysis have similar performance, with each of the methods outperforming a random predictor. / Master of Science / For many years biologists have been working towards studying diseases, characterizing disease history and identifying what factors and genetic variants lead to diseases. Such studies are critical to working towards the advanced prognosis of diseases and being able to identify targeted treatment plans to cure diseases. An important characteristic of diseases is that they can be expressed by a set of phenotypes. Phenotypes are defined as observable characteristics or traits of an organism, such as height and the color of the eyes and hair. In the context of diseases, the phenotypes that describe diseases are referred to as clinical phenotypes, with some examples being short stature, abnormal hair pattern, etc. Biologists have identified the importance of deep phenotyping, which is defined as a concise analysis that gathers information about diseases and their observed traits in humans, in finding genetic variants underlying human diseases. We make use of the Human Phenotype Ontology (HPO), a standardized vocabulary of phenotypic abnormalities that occur in human diseases. The HPO provides relationships between phenotypes as well as associations between phenotypes and genes. In our study, we perform a systematic benchmarking to evaluate different types of computational approaches for the task of phenotype-gene prediction, across multiple molecular networks using various feature representations and for multiple evaluation strategies.
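The abstract does not specify the thesis's exact label-propagation formulation. The sketch below shows one common variant on a gene network, iterating f <- alpha*S@f + (1-alpha)*y0 over a symmetrically normalized adjacency matrix seeded with known phenotype genes; it is an assumed illustration, not the benchmarked implementation:

```python
import numpy as np

def label_propagation(W, y0, alpha=0.85, n_iter=50):
    """Iterate f <- alpha * S @ f + (1 - alpha) * y0, where S is the
    symmetrically normalized adjacency matrix of the gene network and
    y0 flags genes already annotated with the phenotype."""
    d = W.sum(axis=1)
    d[d == 0] = 1.0                       # guard isolated nodes
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    S = D_inv_sqrt @ W @ D_inv_sqrt
    f = y0.astype(float)
    for _ in range(n_iter):
        f = alpha * (S @ f) + (1 - alpha) * y0
    return f                              # higher = stronger association

# Toy 4-gene network; gene 0 is the known phenotype gene.
W = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(label_propagation(W, np.array([1, 0, 0, 0])))
```

For alpha < 1 the iteration converges because the spectral radius of S is at most 1, so scores spread from the seed gene and decay with network distance.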
