151

Modelagem de relações simbióticas em um ecossistema computacional para otimização / Modeling of symbiotic relationships in a computational ecosystem for optimization

André, Leanderson 27 August 2015 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / Nature offers a wide range of phenomena that inspire the development of new technologies. Researchers in the area of Natural Computing abstract the concept of optimization from various biological processes, such as the evolution of species, the behavior of social groups, and the search for food. Computer systems that resemble natural biological systems are called biologically plausible, and developing such algorithms is attractive because biological systems are able to handle extremely complex problems. Symbiotic relationships are one of several phenomena that can be observed in nature: interactions that organisms carry out with each other, resulting in benefit or harm to those involved. In an optimization context, symbiotic relationships can be used to exchange information between populations of candidate solutions to a given problem. This work therefore highlights the concepts of symbiotic relationships that may be important for developing computer systems that solve complex problems. The main discussion concerns the use of symbiotic relationships between populations of candidate solutions co-evolving in an ecological context: each population interacts with another according to a specific symbiotic relationship in order to evolve its solutions. The proposed model is applied to several continuous benchmark functions with a high number of dimensions (D = 200) and to several benchmark instances of the multiple knapsack problem. The results obtained so far are promising concerning the application of symbiotic relationships. Finally, conclusions are presented and some future directions for research are suggested.
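The abstract leaves the interaction operators unspecified; the sketch below illustrates the general idea under stated assumptions — two populations improve independently and periodically trade their best solutions (a mutualism-style exchange) on a sphere benchmark. All names, operators and parameters here are illustrative, not taken from the thesis.

```python
import random

def sphere(x):                      # continuous benchmark: minimize sum of squares
    return sum(v * v for v in x)

def evolve(pop, fitness, rate=0.1):
    """One generation of simple Gaussian-mutation hill climbing."""
    return [min(ind, [v + random.gauss(0, rate) for v in ind], key=fitness)
            for ind in pop]

def mutualism(pop_a, pop_b, fitness):
    """Both populations benefit: each replaces its worst member
    with a copy of the other's best."""
    best_a, best_b = min(pop_a, key=fitness), min(pop_b, key=fitness)
    pop_a[pop_a.index(max(pop_a, key=fitness))] = list(best_b)
    pop_b[pop_b.index(max(pop_b, key=fitness))] = list(best_a)

D, N = 200, 20                      # dimensions and population size
pop_a = [[random.uniform(-5, 5) for _ in range(D)] for _ in range(N)]
pop_b = [[random.uniform(-5, 5) for _ in range(D)] for _ in range(N)]
for gen in range(200):
    pop_a, pop_b = evolve(pop_a, sphere), evolve(pop_b, sphere)
    if gen % 10 == 0:               # periodic symbiotic interaction
        mutualism(pop_a, pop_b, sphere)
print(min(sphere(ind) for ind in pop_a + pop_b))
```

Other symbiotic relationships map naturally onto variants of the exchange: a commensalism-style operator would let only one population receive the other's best, and a parasitism-style operator would let one population gain at the other's expense.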
152

Assistance à la construction et à la comparaison de techniques de diagnostic des connaissances / Assistance to build and compare knowledge diagnostic techniques

Lallé, Sébastien 11 December 2013 (has links)
Comparing and building knowledge diagnostics is a challenge in the field of Technology Enhanced Learning (TEL) systems. A knowledge diagnostic aims to infer the knowledge mastered or not by a student in a given learning domain (like mathematics for high school) using student traces recorded by the TEL system. Knowledge diagnostics are widely used, but they strongly depend on the learning domain and are not well formalized. Thus, there exists no method or tool to build, compare and evaluate different diagnostics applied to a given learning domain. Similarly, using a diagnostic in two different domains usually implies reimplementing it almost from scratch. Yet, comparing and reusing knowledge diagnostics can reduce the engineering cost, reinforce the evaluation and, finally, help knowledge diagnostic designers choose a diagnostic. We propose a method, reified in a first platform, to assist knowledge diagnostic designers in building and comparing knowledge diagnostics, based on a new formalization of the diagnostic and on student traces. To help build diagnostics, we use a semi-automatic machine learning algorithm, guided by an ontology of the traces and of the domain knowledge designed by the diagnostic designer. To help compare diagnostics, we apply a set of comparison criteria (either statistical or specific to the field of TEL systems) to the results of each diagnostic on a given set of traces. The main contribution is that our method is generic over diagnostics: very different diagnostics can be built and compared, unlike in previous work on this topic. We evaluated our work through three experiments. The first applied the method to three different domains and sets of traces (geometry, reading and surgery) to build and compare five different knowledge diagnostics in cross-validation. The second designed and implemented a new comparison criterion specific to TEL systems: the impact of the knowledge diagnostic on a pedagogical decision, namely the choice of a type of help to give to a student. The last designed and added a new diagnostic to our platform, in collaboration with an expert in didactics.
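The abstract does not enumerate the platform's criteria; the following is a minimal sketch of the statistical-criteria idea, assuming each diagnostic outputs a mastery probability per trace and that each trace carries the observed correctness of the student's answer. The diagnostic names and numbers are hypothetical.

```python
from sklearn.metrics import accuracy_score, roc_auc_score

# Hypothetical traces: observed outcome of each exercise (1 = correct)
# and the mastery probability each diagnostic predicted beforehand.
observed   = [1, 0, 1, 1, 0, 1, 0, 1]
diag_bayes = [0.9, 0.4, 0.7, 0.8, 0.3, 0.6, 0.5, 0.9]
diag_rules = [0.6, 0.2, 0.9, 0.5, 0.4, 0.8, 0.1, 0.7]

def compare(name, predictions):
    """Apply two statistical comparison criteria to one diagnostic."""
    acc = accuracy_score(observed, [p >= 0.5 for p in predictions])
    auc = roc_auc_score(observed, predictions)
    print(f"{name}: accuracy={acc:.2f}, AUC={auc:.2f}")

compare("Bayesian diagnostic", diag_bayes)
compare("Rule-based diagnostic", diag_rules)
```

A TEL-specific criterion such as the one in the second experiment would replace the metric with the downstream effect of the diagnosis, e.g. whether the chosen type of help actually improved the student's subsequent answers.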
153

Uma formulação por média-variância multi-período para o erro de rastreamento em carteiras de investimento. / A multi-period mean-variance formulation of tracking error for portfolio selection.

Yeison Andres Zabala 24 February 2016 (has links)
In this work, an optimal policy for portfolio selection based on mean-variance analysis of the multi-period tracking error (ERM) is derived, where the ERM is understood as the difference between the capital accumulated by the selected portfolio and that accumulated by a benchmark portfolio. The methodology discussed by Li-Ng in [24] is applied to obtain an analytical solution, generalizing the single-period case introduced by Roll in [38]. A portfolio is then selected from the Brazilian stock market based on the correlation factor, adopting as benchmark the São Paulo stock exchange index IBOVESPA, with the basic interest rate SELIC as the fixed-income asset. Two cases are addressed: a portfolio composed only of risky assets (case I), and a portfolio with a risk-free asset, indexed to the SELIC, together with the assets of case I (case II).
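The abstract states the objective without formulas; the block below is a plausible sketch of a Li–Ng-style multi-period mean-variance tracking-error formulation. All symbols are assumed here for illustration, not taken from the thesis.

```latex
% W_t = wealth of the chosen portfolio, B_t = wealth of the benchmark,
% \pi_t = amounts invested in the risky assets, r_t^0 = risk-free (SELIC)
% return, \mathbf{r}_t = risky returns; e_T is the tracking error at T.
\begin{align*}
  e_T &= W_T - B_T, \\
  W_{t+1} &= r_t^{0} W_t
      + (\mathbf{r}_t - r_t^{0}\mathbf{1})^{\top}\boldsymbol{\pi}_t,
      \qquad t = 0,\dots,T-1, \\
  \max_{\{\boldsymbol{\pi}_t\}}\;&
      \mathbb{E}[e_T] - \lambda\,\mathrm{Var}(e_T), \qquad \lambda > 0.
\end{align*}
```

Because Var(e_T) is not separable in the dynamic-programming sense, the Li–Ng approach embeds the problem in an auxiliary quadratic-utility problem that is separable and recovers the mean-variance optimum from its solution.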
154

Virtual Reality und Product Lifecycle Management – Entwicklung eines durchgängigen Prozesses für die BSH Bosch und Siemens Hausgeräte GmbH / Virtual Reality and Product Lifecycle Management – Development of an End-to-End Process for BSH Bosch und Siemens Hausgeräte GmbH

Rehfeld, Ingolf, Wunderlich, Jan 25 September 2017 (has links) (PDF)
The world's leading manufacturers of branded products are committed to being the industry benchmark for the quality, design, innovation and utility of their products. Achieving this goal at competitive prices and in ever shorter innovation cycles is not a matter of chance, but the result of visionary corporate strategies that rely early on standardized product development processes and continuous, supporting IT systems within a consistent Product Lifecycle Management (PLM).
155

Construction d'indicateurs de toxicités cumulées : cas des composés organiques semi volatils dans les environnements intérieurs / Derivation of cumulative toxicity indicators: the case of semi-volatile organic compounds in indoor environments

Fournier, Kevin 09 October 2015 (has links)
Semi-volatile organic compounds (SVOCs) are widely present in indoor environments and are suspected of being repro- or neurotoxic, but little is known about the health impact of SVOC mixtures. The objective of this work is to derive cumulative toxicity indicators for SVOCs detected in French dwellings, within a cumulative health risk assessment framework. SVOCs were grouped according to their common repro- and neurotoxic modes of action (i.e. decrease in serum testosterone concentration, decrease in neuronal viability). Benchmark doses (BMDs) were then estimated by modeling dose-response relationships from the scientific literature (Hill model, PROAST, RIVM). Comparable BMDs could be estimated for only 6 of the 19 reprotoxic SVOCs inducing a 10% or 50% decrease in testosterone in adult male rats exposed orally. The relative potency factors (RPFs) estimated from the BMDs are similar across response levels (from 1600 for B(a)P to 0.1 for BBP), except for bisphenol A, which drops from 7E+6 to 180. For in vitro neuronal death, BMDs could be estimated for 13 neurotoxic SVOCs using data from different cell lines and species; the BMDs for a 10% response level range from 0.07 µM (PCB-153) to 95 µM (diazinon). The originality of this work lies in grouping compounds from different chemical families to which we are actually exposed. Estimating BMDs from published data was possible, but many methodological limitations lead us to put forward recommendations, in particular on the standardization of experimental protocols and on the availability of results in a format suited to dose-response modeling.
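The abstract names the Hill model (as fitted with PROAST) without stating it; below is a sketch of one common four-parameter form for a decreasing continuous endpoint, together with the closed-form BMD it implies. PROAST's exact parameterization may differ.

```latex
% a = background response, b = dose at half-maximal change,
% c = maximal fold change (c < 1 for a decrease), n = steepness.
\begin{align*}
  f(d) &= a\left[1 + (c-1)\,\frac{d^{\,n}}{b^{\,n} + d^{\,n}}\right],
      \qquad f(0) = a,\quad \lim_{d\to\infty} f(d) = a\,c.
\end{align*}
% For a benchmark response of x% (x = 10 or 50 in this work), the BMD
% solves f(BMD) = a(1 - x/100), which gives
\begin{align*}
  \mathrm{BMD} = b\left(\frac{q}{1-q}\right)^{1/n},
      \qquad q = \frac{x/100}{1-c},\quad 0 < q < 1.
\end{align*}
```

The relative potency factor of a compound is then the ratio of the index compound's BMD to its own, which is why the RPFs above can shift when the response level x changes.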
156

Estudo, avaliação e comparação de técnicas de detecção não supervisionada de outliers / Study, evaluation and comparison of unsupervised outlier detection techniques

Guilherme Oliveira Campos 05 March 2015 (has links)
The outlier detection (or anomaly detection) area plays an essential role in discovering patterns in data that can be considered exceptional from some perspective. Detecting such patterns is relevant in general because, in many data mining applications, such patterns represent extraordinary behaviors that deserve special attention. An important distinction exists between supervised and unsupervised detection techniques; this project focuses on the unsupervised ones. There are dozens of algorithms in this category in the literature, and new algorithms are proposed from time to time, but each uses its own notion of what should or should not be considered an outlier, which is a subjective concept in the unsupervised context. This considerably complicates the choice of a particular algorithm in a given practical application. While it is common knowledge that no machine learning algorithm can be superior to all others in all application scenarios, it is a relevant question whether the performance of certain algorithms tends in general to dominate that of certain others, at least in particular classes of problems. This project contributes to the study, selection and pre-processing of databases suitable to join a benchmark collection for evaluating unsupervised outlier detection algorithms, and it comparatively evaluates the performance of outlier detection methods. During part of my master's work, I had the intellectual collaboration of Erich Schubert, Ira Assent, Barbora Micenková, Michael Houle and, especially, Joerg Sander and Arthur Zimek. Their contribution was essential for the analysis of the results and the compact way of presenting them.
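The abstract does not fix an evaluation protocol; a minimal sketch of the benchmark idea follows, using LOF from scikit-learn as one representative unsupervised detector and ROC AUC as the quality measure. The data and parameters are illustrative only — ground-truth labels are used solely for evaluation, never by the detector.

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
inliers  = rng.normal(0.0, 1.0, size=(200, 2))   # dense cluster
outliers = rng.uniform(-6.0, 6.0, size=(10, 2))  # scattered anomalies
X = np.vstack([inliers, outliers])
y = np.array([0] * 200 + [1] * 10)               # labels for evaluation only

lof = LocalOutlierFactor(n_neighbors=20)
lof.fit(X)
scores = -lof.negative_outlier_factor_           # higher = more outlying
print(f"LOF ROC AUC: {roc_auc_score(y, scores):.3f}")
```

Running many detectors over many such labeled datasets, each preprocessed the same way, is what turns individual scores into the kind of comparative evidence the project aims to collect.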
157

Leistungsoptimierung der persistenten Datenverwaltung in DSP-Architekturen zur Live-Analyse von Sensordaten / Performance Optimization of Persistent Data Management in DSP Architectures for the Live Analysis of Sensor Data

Weißbach, Manuel 28 October 2021 (has links)
Owing to the ever-growing amount of data to be processed in many areas, big-data applications have become increasingly widespread in recent years. As early as 2011, Twitter reported inspecting 15 million URLs per day in real time in order to block the spread of spam links [1]. Facebook processes over four million "likes" per minute and manages over 300 petabytes of data [2]. The business portal LinkedIn delivered around one billion messages per day in 2011; by 2015, according to the company, this had grown to 1.1 trillion messages sent daily [3]. This sharp rise reflects the exponential growth that is typical of big data. Gartner defines the term "big data" by its specific characteristics, known as the "three V's": volume, variety and velocity [4]. Besides the enormous amount of data to be processed ("volume") and its diversity and lack of structure ("variety"), the speed at which the data are generated ("velocity") is thus an essential characteristic of big data [5, 6]. If a processing backlog is to be avoided despite the constant and ever faster generation of new data, it follows that the continuously growing data volumes must also be processed ever faster.
159

Optimering av enzym-baserad immunohistokemisk metod i jämförelse mot immunofluorescens med fryssnittade hudbiopsier / Optimization of enzyme-based immunohistochemical method in comparison with immunofluorescence with frozen-cut skin biopsies

Johansson, Karin January 2022 (has links)
Using immunohistochemistry, antigens and antibodies bound to tissue can be detected. Autoimmune skin diseases are examples of diseases diagnosed with immunohistochemistry. At Falu Hospital, immunofluorescence was used to diagnose autoimmune skin diseases. The aim of this study was to optimize enzyme-based immunohistochemistry for the epitopes IgA, IgG, IgM and C3 and to compare it with immunofluorescence in terms of specificity, signal strength and resolution. The tissues analyzed were tonsil, liver, intestinal mucosa and skin biopsies. The tissues stained with ultraView DAB and ultraView DAB with FITC were fixed in 4% formaldehyde, whereas tissue stained with DIF with FITC was rinsed in Reaction Buffer only. The tissues were stained according to the ultraView DAB, ultraView DAB with FITC and DIF with FITC protocols, and a manual staining with activated DAB was performed. The results showed background staining for all stainings. DIF with FITC was more clearly stained, and it was easier to distinguish specific from nonspecific staining. Enzyme-based IHC is difficult to optimize, and the epitope IgA generally showed stronger staining than the epitope IgM. To obtain reliable results, several tissue samples must be analyzed.
160

An Instance Data Repository for the Round-robin Sports Timetabling Problem

Van Bulck, David, Goossens, Dries, Schönberger, Jörn, Guajardo, Mario 11 August 2020 (has links)
The sports timetabling problem is a combinatorial optimization problem that consists of creating a timetable defining against whom, when and where teams play their games. This is a complex matter, since real-life sports timetabling applications are typically highly constrained. The vast number and variety of constraints, together with the lack of generally accepted benchmark problem instances, mean that timetabling algorithms proposed in the literature are often tested on just one or two specific seasons of the competition under consideration. This is problematic, since only few algorithmic insights are gained. To mitigate this issue, this article provides a problem instance repository containing over 40 different types of instances, covering both artificial and real-life problems. The construction of such a repository is not trivial, since there are dozens of constraints that need to be expressed in a standardized format. For this, our repository relies on RobinX, an XML-supported classification framework. The resulting repository provides a (non-exhaustive) overview of most real-life sports timetabling applications published over the last five decades. For every problem, a short description highlights its most distinguishing characteristics. The repository is publicly available and will be continuously updated as new instances or better solutions become available.
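The article standardizes instances rather than algorithms, but a small example makes the object being timetabled concrete. Below is a sketch of the classic circle method for a single round-robin — a textbook construction, not code from the repository or from RobinX.

```python
def circle_method(n_teams):
    """Single round-robin via the circle method: team 0 stays fixed
    while the others rotate, giving n-1 rounds for an even n."""
    assert n_teams % 2 == 0, "add a dummy team (byes) for odd n"
    teams = list(range(n_teams))
    rounds = []
    for _ in range(n_teams - 1):
        rounds.append([(teams[i], teams[n_teams - 1 - i])
                       for i in range(n_teams // 2)])
        teams = [teams[0]] + [teams[-1]] + teams[1:-1]  # rotate all but team 0
    return rounds

for r, games in enumerate(circle_method(6), start=1):
    print(f"round {r}: {games}")
```

Real instances in the repository constrain such a skeleton further — home/away patterns, venue availability, broadcasting wishes — which is the kind of requirement the XML-based RobinX format is meant to express.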
