111

Analýza kolektivního investování v podmínkách ČR / Analysis of the collective investment in the Czech republic

BAŠTA, Kristian January 2012 (has links)
The thesis deals with collective investing. The aim is to assess the current state of collective investment in the Czech Republic. Selected characteristics of the sector are analysed in detail. The main aspects discussed in this work are liquidity, cost and, in particular, the profitability of selected products. On the basis of the available information, a portfolio of collective investment products is created for selected age and social groups, intended to secure retirement income or to accumulate funds for future investment. Attention is paid to selected financial products offered by companies on the Czech market. These products include pension funds, investment life insurance and unit trusts.
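The accumulation scenarios mentioned above reduce to compounding regular contributions at each product's expected net return. A minimal sketch of that arithmetic is shown below; the contribution, horizon and return figures are purely illustrative assumptions, not values from the thesis.

```python
# Future value of regular monthly contributions, compounded monthly.
# All figures are illustrative assumptions, not data from the thesis.

def future_value(monthly_contribution, annual_return, years):
    """Future value of an ordinary annuity of monthly contributions."""
    r = annual_return / 12.0          # monthly rate
    n = years * 12                    # number of contributions
    return monthly_contribution * n if r == 0 else monthly_contribution * ((1 + r) ** n - 1) / r

# Hypothetical products with assumed net annual returns.
for name, ret in [("pension fund", 0.02), ("unit trust", 0.04), ("investment life insurance", 0.03)]:
    print(f"{name:>26}: {future_value(1000.0, ret, 30):,.0f} CZK after 30 years")
```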
112

Processos de subjetivação e percursos de sentiduralização na discursividade literária em Lygia Fagundes Telles / Processes of subjectivation and courses of sensementalization in the literary discursivity of Lygia Fagundes Telles

Rosa, Ismael Ferreira 10 December 2013 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / This research aims at analyzing the literary discursivity at Lygia Fagundes Telles, observing its linguistic, historical and subjective tridimensionalities, especially that related to the production of senses and subjects in the woman field from three novels: Ciranda de Pedra (1954), Verão no Aquário (1963) e As Horas Nuas (1989). Based on Discourse Analysis, particularly on studies of Pêcheux about sense and subject, and based on dialogical-polyphonic discussions on language, literature and subject of the Bakhtin Circle conjugated to Foucault s understanding about literary universe and its discursive practices, as well as notions like literary discourse and paratopos of Maingueneau and the becoming notion by Deleuze and Guattari, besides Barthes and Blanchot s discussions, we intend to expose a reader-look at identity processes and subjectmental and sensemental (de)constructions of this following analytical cut: Virgínia, Raíza e Rosa Ambrósio. We propose a theoretical-analytical of interpretive nature research from which cuts of novels linguistic materiality serve as a starting point to approach the way how the literary discursivity moves and produces subjects and senses. Thus, based on Santos and Ferreira-Rosa, we built the analytical and methodological device nonessential in triple-helix, through which we associated analytical centers, in accordance with the cut technique proposed by Orlandi and with recurrence and regularity criteria, establishing combinations of constituted, constituent and constitutive elements. Instituting three helix, a subjectmental one, a sensemental one and an aesthetical one, whose rotating and contrarotating movements represent the tridimensionality of the literary discursivity operation in its sensementalization, we searched to describe and to analyze the construction process of those cut subjects. A process that revealed the establishment of an enunciative subjectmental instance woman characterized by the decentering, the identity fragmentation. An establishment signaled by sensemental courses that depart from a hard side, pass by the fluid and reach the liberty in a way non-linearly and non-continuously. In fact, courses marked by dialogues, silences, acts and actions-forces that build senses of submission, confutation, contradiction and coercion. / Este trabalho tem por escopo analítico sopesar a discursividade literária na produção de Lygia Fagundes Telles, observando a tridimensão do linguístico, do histórico e do subjetivo, sobretudo, no que concerne à construção de sentidos e sujeitos do/no campo feminil em três romances: Ciranda de Pedra (1954), Verão no Aquário (1963) e As Horas Nuas (1989). Fundamentados na Análise do Discurso (AD), em especial, nos estudos de Michel Pêcheux sobre as noções de sentido e sujeito, e nas discussões dialógico-polifônicas sobre linguagem, literatura e sujeito do Círculo de Bakhtin, conjugadas à compreensão de Foucault acerca do universo literário e de suas práticas discursivas, como também às extensões teóricas de discurso literário e paratopia de Maingueneau e à noção de devir de Deleuze e Guattari, não nos esquivando de dialogar com Barthes e Blanchot, alvitramos lançar um olhar-leitor sobre os processos identitários e (des)construções sujeitudinais e sentidurais do seguinte recorte de análise: os sujeitos Virgínia, Raíza e Rosa Ambrósio. 
Propomos uma pesquisa teórico-analítica de cunho interpretativista, em que recortes da materialidade linguística dos romances servem de ponto de partida para a abordagem da maneira como funciona a discursividade literária produzindo sujeitos e sentidos. Para tanto, embasados em Santos e Ferreira-Rosa, construímos o dispositivo analítico-metodológico nonessencial em triplo-hélice por meio qual associamos polos analíticos, instaurando, em consonância à técnica de recorte proposta por Orlandi e aos critérios de recorrência e regularidade, combinações entre elementos constituintes, constituídos e constitutivos. Instituindo uma hélice sujeitudinal, uma sentidural e outra estetical, cujos movimentos rotativos e contrarrotativos representam a tridimensionalidade do funcionamento da discursividade literária em sua sentiduralização, buscamos descrever e analisar o processo de construção dos sujeitos discursivos recortados. Um processo que revelou a instauração de uma instância enunciativa sujeitudinal mulher no crivo do descentramento, do desdobramento, da fragmentação identitária, marcada por percursos sentidurais que partem do rígido, passando pelo fluido até o libertário, de modo descontínuo e deslinear, balizados por diálogos, silêncios, atos-ações e forças que constroem os sentidos da submissão, confutação, contradição e coerção. / Doutor em Estudos Linguísticos
113

Semi-supervised co-selection : instances and features : application to diagnosis of dry port by rail / Co-selection instances-variables en mode semi-supervisé : application au diagnostic de transport ferroviaire.

Makkhongkaew, Raywat 15 December 2016 (has links)
Depuis la prolifération des bases de données partiellement étiquetées, l'apprentissage automatique a connu un développement important dans le mode semi-supervisé. Cette tendance est due à la difficulté de l'étiquetage des données d'une part et au coût induit de cet étiquetage quand il est possible, d'autre part. L'apprentissage semi-supervisé consiste en général à modéliser une fonction statistique à partir de base de données regroupant à la fois des exemples étiquetés et d'autres non-étiquetés. Pour aborder une telle problématique, deux familles d'approches existent : celles basées sur la propagation de la supervision en vue de la classification supervisée et celles basées sur les contraintes en vue du clustering (non-supervisé). Nous nous intéressons ici à la deuxième famille avec une difficulté particulière. Il s'agit d'apprendre à partir de données avec une partie étiquetée relativement très réduite par rapport à la partie non-étiquetée. Dans cette thèse, nous nous intéressons à l'optimisation des bases de données statistiques en vue de l'amélioration des modèles d'apprentissage. Cette optimisation peut être horizontale et/ou verticale. La première définit la sélection d'instances et la deuxième définit la tâche de la sélection de variables. Les deux tâches sont habituellement étudiées de manière indépendante avec une série de travaux considérable dans la littérature. Nous proposons ici de les étudier dans un cadre simultané, ce qui définit la thématique de la co-sélection. Pour ce faire, nous proposons deux cadres unifiés considérant à la fois la partie étiquetée des données et leur partie non-étiquetée. Le premier cadre est basé sur un clustering pondéré sous contraintes et le deuxième sur la préservation de similarités entre les données. Les deux approches consistent à qualifier les instances et les variables pour en sélectionner les plus pertinentes de manière simultanée. Enfin, nous présentons une série d'études empiriques sur des données publiques connues de la littérature pour valider les approches proposées et les comparer avec d'autres approches connues dans la littérature. De plus, une validation expérimentale est fournie sur un problème réel, concernant le diagnostic de transport ferroviaire de l'état de la Thaïlande. / We are drowning in massive data but starved for knowledge. It is well known from the dimensionality trade-off that more data brings more information but at a price in computational complexity, which has to be made up in some way. When the labeled sample is too small to bring sufficient information about the target concept, supervised learning fails. Unsupervised learning can be an alternative, but since these algorithms ignore label information, important hints from the labeled data are left out, which generally degrades their performance. Using both labeled and unlabeled data is expected to work better; this is semi-supervised learning, which is better adapted to large applications where labels are hard and costly to obtain. In addition, when data are large, feature selection and instance selection are two important dual operations for removing irrelevant information. Both tasks, in the semi-supervised setting, are challenges for the machine learning and data mining communities aiming at dimensionality reduction and knowledge retrieval. In this thesis, we focus on the co-selection of instances and features in the context of semi-supervised learning.
In this context, co-selection becomes a more challenging problem as the data contain labeled and unlabeled examples sampled from the same population. To perform such semi-supervised co-selection, we propose two unified frameworks, which efficiently integrate the labeled and unlabeled parts into the co-selection process. The first framework is based on weighted constrained clustering and the second on similarity-preserving selection. Both approaches evaluate the usefulness of features and instances in order to select the most relevant ones simultaneously. Finally, we present a variety of empirical studies over high-dimensional data sets that are well known in the literature. The results are promising and prove the efficiency and effectiveness of the proposed approaches. In addition, the developed methods are validated on a real-world application, over data provided by the State Railway of Thailand (SRT). The purpose is to apply models derived from our methodological contributions to diagnose the performance of rail dry port systems. First, we present the results of some ensemble methods applied to a first data set, which is fully labeled. Second, we show how our co-selection approaches can improve the performance of learning algorithms over partially labeled data provided by SRT.
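As a rough illustration of the similarity-preserving idea behind the second framework, the sketch below jointly scores features and instances on a partially labeled sample and keeps the top-ranked of each. The scoring rules (a quadratic form against the similarity matrix for features, average similarity for instances) are assumptions made for illustration, not the thesis's actual criteria.

```python
import numpy as np

def co_select(X, y, n_feat, n_inst):
    """Toy semi-supervised co-selection. y holds class labels; -1 marks unlabeled rows.
    Returns (indices of selected features, indices of selected instances)."""
    n, d = X.shape
    # Pairwise similarity (RBF on Euclidean distance) over labeled + unlabeled data.
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    S = np.exp(-(dist / (np.median(dist) + 1e-12)) ** 2)
    # The only supervision used: pairs known to share a label get maximal similarity.
    labeled = y != -1
    same = (y[:, None] == y[None, :]) & labeled[:, None] & labeled[None, :]
    S = np.where(same, 1.0, S)

    # Feature score: quadratic form x_j^T S x_j on standardized features,
    # i.e. how strongly a feature's values align with the similarity structure.
    Xs = (X - X.mean(0)) / (X.std(0) + 1e-12)
    feat_score = np.array([Xs[:, j] @ S @ Xs[:, j] for j in range(d)])

    # Instance score: average similarity to all other points (representativeness).
    inst_score = S.mean(axis=1)

    # Select the top-ranked features and instances simultaneously.
    return np.argsort(-feat_score)[:n_feat], np.argsort(-inst_score)[:n_inst]

# Tiny usage example on random data; only the first 8 rows are labeled.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 10))
y = np.r_[rng.integers(0, 2, 8), -np.ones(32, dtype=int)]
print(co_select(X, y, n_feat=3, n_inst=10))
```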
114

Uma abordagem para a escolha do melhor método de seleção de instâncias usando meta-aprendizagem / An approach for choosing the best instance selection method using meta-learning

MOURA, Shayane de Oliveira 21 August 2015 (has links)
IF Sertão - PE / Os sistemas de Descoberta de Conhecimentos em Bases de Dados (mais conhecidos como sistemas KDD) e métodos de Aprendizagem de Máquinas preveem situações, agrupam e reconhecem padrões, entre outras tarefas que são demandas de um mundo no qual a maioria dos serviços está sendo oferecido por meio virtual. Apesar dessas aplicações se preocuparem em gerar informações de fácil interpretação, rápidas e confiáveis, as extensas bases de dados utilizadas dificultam o alcance de precisão unida a um baixo custo computacional. Para resolver esse problema, as bases de dados podem ser reduzidas com o objetivo de diminuir o tempo de processamento e facilitar o seu armazenamento, bem como, guardar apenas informações suficientes e relevantes para a extração do conhecimento. Nesse contexto, Métodos de Seleção de Instâncias (MSIs) têm sido propostos para reduzir e filtrar as bases de dados, selecionando ou criando novas instâncias que melhor as descrevam. Todavia, aqui se aplica o Teorema do No Free Lunch, ou seja, a performance dos MSIs varia conforme a base e nenhum dos métodos sempre sobrepõe seu desempenho aos demais. Por isso, esta dissertação propõe uma arquitetura para selecionar o “melhor” MSI para uma dada base de dados (mais adequado em relação à precisão), chamada Meta-CISM (Meta-learning for Choosing Instance Selection Method). Estratégias de meta-aprendizagem são utilizadas para treinar um meta-classificador que aprende sobre o relacionamento entre a taxa de acerto de MSIs e a estrutura das bases. O Meta-CISM utiliza ainda reamostragem e métodos de seleção de atributos para melhorar o desempenho do meta-classificador. A proposta foi avaliada com os MSIs: C-pruner, DROP3, IB3, ICF e ENN-CNN. Os métodos de reamostragem utilizados foram: Bagging e Combination (método proposto neste trabalho). Foram utilizados como métodos de seleção de atributos: Relief-F, CFS, Chi Square Feature Evaluation e Consistency-Based Subset Evaluation. Cinco classificadores contribuíram para rotular as meta-instâncias: C4.5, PART, MLP-BP, SMO e KNN. Uma MLP-BP treinou o meta-classificador. Os experimentos foram realizados com dezesseis bases de dados públicas. O método proposto (Meta-CISM) foi melhor que todos os MSIs estudados, na maioria dos experimentos realizados. Visto que eficientemente seleciona um dos três melhores MSIs em mais de 85% dos casos, a abordagem é adequada para ser automaticamente utilizada na fase de pré-processamento das bases de dados. / The systems for Knowledge Discovery in Databases (better known as KDD systems) and Machine Learning methods predict situations and recognize and group (cluster) patterns, among other tasks demanded by a world in which most services are offered through virtual means. Although these applications aim to generate fast, reliable and easy-to-interpret information, the extensive databases they use make it difficult to achieve accuracy at a low computational cost.
To solve this problem, the databases can be reduced in order to decrease processing time and facilitate storage, as well as to keep only information that is sufficient and relevant for knowledge extraction. In this context, Instance Selection Methods (ISMs) have been proposed to reduce and filter databases, selecting or creating new instances that best describe them. Nevertheless, the No Free Lunch theorem applies: the performance of ISMs varies according to the base, and no method always outperforms the others. Therefore, this work proposes an architecture to select the "best" ISM for a given database (best suited with respect to accuracy), called Meta-CISM (Meta-learning for Choosing Instance Selection Method). Meta-learning strategies are used to train a meta-classifier that learns the relationship between the accuracy of ISMs and the structure of the bases. Meta-CISM also uses resampling and feature selection methods to improve the meta-classifier's performance. The proposal was evaluated with the ISMs C-pruner, DROP3, IB3, ICF and ENN-CNN. The resampling methods used were Bagging and Combination (a method proposed in this work). The feature selection methods used were Relief-F, CFS, Chi Square Feature Evaluation and Consistency-Based Subset Evaluation. Five classifiers contributed to labeling the meta-instances: C4.5, PART, MLP-BP, SMO and KNN. The meta-classifier was trained with an MLP-BP. Experiments were carried out with sixteen public databases. The proposed method (Meta-CISM) was better than all ISMs studied in most of the experiments performed. Since it efficiently selects one of the three best ISMs in more than 85% of cases, the approach is suitable for automatic use in the pre-processing phase of databases.
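The architecture described above can be pictured as a standard meta-learning loop: characterize each training database with meta-features, label it with the instance selection method that gives the best accuracy on it, and train a meta-classifier on those pairs. The sketch below illustrates that loop with scikit-learn; the meta-features, the toy selectors and the MLP settings are placeholders rather than the ones evaluated in the dissertation.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

def meta_features(X, y):
    """Very small set of placeholder meta-features describing a dataset."""
    return np.array([len(X), X.shape[1], len(np.unique(y)), X.std(axis=0).mean()])

def best_selector_index(X, y, selectors):
    """Meta-label: index of the selector whose reduced set gives the best 1-NN accuracy."""
    accs = []
    for select in selectors:
        Xr, yr = select(X, y)                          # each selector returns a reduced dataset
        accs.append(cross_val_score(KNeighborsClassifier(1), Xr, yr, cv=3).mean())
    return int(np.argmax(accs))

def train_meta_classifier(datasets, selectors):
    """datasets: list of (X, y) pairs. Returns an MLP predicting the best selector index."""
    M = np.array([meta_features(X, y) for X, y in datasets])
    t = np.array([best_selector_index(X, y, selectors) for X, y in datasets])
    return MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(M, t)

# Toy stand-ins for real instance selection methods (DROP3, ICF, ...).
keep_all = lambda X, y: (X, y)
def keep_random_half(X, y, rng=np.random.default_rng(0)):
    idx = rng.choice(len(X), size=len(X) // 2, replace=False)
    return X[idx], y[idx]

# Usage (sketch): meta = train_meta_classifier(list_of_datasets, [keep_all, keep_random_half])
#                 predicted = meta.predict([meta_features(X_new, y_new)])
```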
115

Similaridade de algoritmos em cenários sensíveis a custo / Algorithm similarity in cost-sensitive scenarios

MELO, Carlos Eduardo Castor de 27 August 2015 (has links)
FACEPE / A análise da similaridade entre algoritmos de aprendizagem de máquina é um importante aspecto na área de Meta-Aprendizado, onde informações obtidas a partir de processos de aprendizagem conhecidos podem ser utilizadas para guiar a seleção de algoritmos para tratar novos problemas apresentados. Essa similaridade é geralmente calculada através de métricas globais de desempenho, que omitem informações importantes para o melhor entendimento do comportamento dos algoritmos. Também existem abordagens onde é verificado o desempenho individualmente em cada instância do problema. Ambas as abordagens não consideram os custos associados a cada classe do problema, negligenciando informações que podem ser muito importantes em vários contextos de aprendizado. Nesse trabalho são apresentadas métricas para a avaliação do desempenho de algoritmos em cenários sensíveis a custo. Cada cenário é descrito a partir de um método para escolha de limiar para a construção de um classificador a partir de um modelo aprendido. Baseado nos valores de desempenho em cada instância, é proposta uma forma de avaliar a similaridade entre os algoritmos tanto em nível de problema como em nível global. Os experimentos realizados para ilustrar as métricas apresentadas neste trabalho foram realizados em um estudo de Meta-Aprendizado utilizando 19 algoritmos para a classificação das instâncias de 152 problemas. As medidas de similaridades foram utilizadas para a criação de agrupamentos hierárquicos. Os agrupamentos criados mostram como o comportamento entre os algoritmos diversifica de acordo com o cenário de custo a ser tratado. / The analysis of the similarity between machine learning algorithms is an important aspect of Meta-Learning, where knowledge gathered from known learning processes can be used to guide the selection of algorithms to tackle new learning problems. This similarity is usually calculated through global performance metrics, which omit important information about algorithm behavior. There are also approaches in which performance is verified individually on each instance of a problem. Neither approach considers the costs associated with each problem class, so both neglect information that can be very important in different learning contexts. In this study, metrics are presented to evaluate the performance of algorithms in cost-sensitive scenarios. Each scenario is described by a threshold choice method, used to build a crisp classifier from a learned model. Based on the performance values for each problem instance, a method is proposed to measure the similarity between the algorithms at a local level (for each problem) and at a global level (across all problems observed). The experiments used to illustrate the metrics presented in this work were performed in a Meta-Learning study using 19 algorithms for the classification of the instances of 152 learning problems.
The similarity measures were used to create hierarchical clusterings. The resulting clusters show how the behavior of the algorithms varies according to the cost scenario being treated.
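One way to picture the approach: score each algorithm on every instance under a chosen threshold, define the similarity of two algorithms from the agreement of those per-instance scores, and feed the resulting distances to hierarchical clustering. The sketch below assumes a fixed 0.5 threshold and a cost-weighted 0/1 loss purely for illustration; the dissertation's threshold choice methods and metrics are not reproduced here.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def instance_losses(scores, y_true, class_costs, threshold=0.5):
    """Per-instance cost-weighted 0/1 loss for one algorithm's scores (assumed setup)."""
    pred = (scores >= threshold).astype(int)
    return (pred != y_true) * class_costs[y_true]

def similarity_matrix(all_scores, y_true, class_costs):
    """Algorithm-by-algorithm similarity: 1 minus mean absolute difference of instance losses."""
    L = np.array([instance_losses(s, y_true, class_costs) for s in all_scores])
    n_alg = len(L)
    S = np.ones((n_alg, n_alg))
    for i in range(n_alg):
        for j in range(i + 1, n_alg):
            S[i, j] = S[j, i] = 1.0 - np.mean(np.abs(L[i] - L[j]))
    return S

# Toy example: 3 "algorithms" scoring 8 instances; misclassifying class 1 costs twice as much.
y = np.array([0, 1, 1, 0, 1, 0, 0, 1])
costs = np.array([1.0, 2.0])
scores = np.random.default_rng(1).random((3, 8))
S = similarity_matrix(scores, y, costs)
Z = linkage(squareform(1.0 - S, checks=False), method="average")   # hierarchical clustering
print(fcluster(Z, t=2, criterion="maxclust"))
```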
116

PROBLEMATIKA RIADENIA LIKVIDITY FEDERÁLNEHO REZERVNÉHO SYSTÉMU V KONTEXTE BANKOVEJ KRÍZY 1929 - 1933 / LIQUIDITY MANAGEMENT PROBLEMS OF FED DURING BANKING PANIC 1929 - 1933

Titze, Miroslav January 2013 (has links)
The main goal of the diploma thesis is to research the liquidity management problems of the Federal Reserve System during the banking crisis of 1929-1933. Monetary policy implementation based on implicit reserve targeting was not suitable in times of sharp expansion of the demand for reserves. The FED was misled by the Real Bills and Riefler-Burgess doctrines and considered monetary conditions to be easy. Money market interest rates responded only moderately to the shortage of the banking system's liquidity. The origin of the first quantitative easing can be found in 1932, when the FED first bought larger quantities of government securities. Expansionary monetary policy during the banking crisis of 1929-1933 was also potentially limited by the conflict between U.S. financial stability and the sustainability of the gold standard.
117

Algoritmos anytime baseados em instâncias para classificação em fluxo de dados / Instance-based anytime algorithms for data stream classification

Cristiano Inácio Lemes 09 March 2016 (has links)
Aprendizado em fluxo de dados é uma área de pesquisa importante e que vem crescendo nos últimos tempos. Em muitas aplicações reais os dados são gerados em uma sequência temporal potencialmente infinita. O processamento em fluxo possui como principal característica a necessidade por respostas que atendam restrições severas de tempo e memória. Por exemplo, um classificador aplicado a um fluxo de dados deve prover uma resposta a um determinado evento antes que o próximo evento ocorra. Caso isso não ocorra, alguns eventos do fluxo podem ficar sem classificação. Muitos fluxos geram eventos em uma taxa de chegada com grande variabilidade, ou seja, o intervalo de tempo de ocorrência entre dois eventos sucessivos pode variar muito. Para que um sistema de aprendizado obtenha sucesso na aquisição de conhecimento é preciso que ele apresente duas características principais: (i) ser capaz de prover uma classificação para um novo exemplo em tempo hábil e (ii) ser capaz de adaptar o modelo de classificação de maneira a tratar mudanças de conceito, uma vez que os dados podem não apresentar uma distribuição estacionária. Algoritmos de aprendizado de máquina em lote não possuem essas propriedades, pois assumem que as distribuições são estacionárias e não estão preparados para atender restrições de memória e processamento. Para atender essas necessidades, esses algoritmos devem ser adaptados ao contexto de fluxo de dados. Uma possível adaptação é tornar o algoritmo de classificação anytime. Algoritmos anytime são capazes de serem interrompidos e prover uma resposta (classificação) aproximada a qualquer instante. Outra adaptação é tornar o algoritmo incremental, de maneira que seu modelo possa ser atualizado para novos exemplos do fluxo de dados. Neste trabalho é realizada a investigação de dois métodos capazes de realizar o aprendizado em um fluxo de dados. O primeiro é baseado no algoritmo k-vizinhos mais próximo anytime estado-da-arte, onde foi proposto um novo método de desempate para ser utilizado neste algoritmo. Os experimentos mostraram uma melhora consistente no desempenho deste algoritmo em várias bases de dados de benchmark. O segundo método proposto possui as características dos algoritmos anytime e é capaz de tratar a mudança de conceito nos dados. Este método foi chamado de Algoritmo Anytime Incremental e possui duas versões, uma baseado no algoritmo Space Saving e outra em uma Janela Deslizante. Os experimentos mostraram que em cada fluxo cada versão deste método proposto possui suas vantagens e desvantagens. Mas no geral, comparado com outros métodos baselines, ambas as versões apresentaram melhor desempenho. / Data stream learning is a very important research field that has received much attention from the scientific community. In many real-world applications, data is generated as potentially infinite temporal sequences. The main characteristic of stream processing is to provide answers observing stringent restrictions of time and memory. For example, a data stream classifier must provide an answer for each event before the next one arrives. If this does not occur, some events from the data stream may be left unclassified. Many streams generate events with highly variable output rate, i.e. the time interval between two consecutive events may vary greatly. 
For a learning system to be successful, two properties must be satisfied: (i) it must be able to provide a classification for a new example in a short time and (ii) it must be able to adapt the classification model to handle concept change, since the data may not follow a stationary distribution. Batch machine learning algorithms do not satisfy these properties because they assume that the distribution is stationary and they are not prepared to operate under severe memory and processing constraints. To satisfy these requirements, such algorithms must be adapted to the data stream context. One possible adaptation is to turn the algorithm into an anytime classifier. Anytime algorithms may be interrupted and still provide an approximate answer (classification) at any time. Another adaptation is to turn the algorithm into an incremental classifier, so that its model may be updated with new examples from the data stream. In this work, an evaluation of two approaches for data stream learning is performed. The first one is based on a state-of-the-art anytime k-nearest neighbor classifier. A new tie-breaking approach is proposed for use with this algorithm. Experiments show consistently better performance for this algorithm on many benchmark data sets. The second proposed approach adapts the anytime algorithm to concept change. This approach, called Incremental Anytime Algorithm, was designed in two versions: one based on the Space Saving algorithm and the other on a Sliding Window. Experiments show that both versions are significantly better than the baseline approaches.
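The anytime behavior described above can be illustrated with an instance-based classifier that refines its answer one training example at a time and can be interrupted after any step. The sketch below is a generic anytime k-NN with a sliding-window update; the thesis's tie-breaking scheme and Space Saving variant are not reproduced, so the details are assumptions.

```python
import heapq
import numpy as np

class AnytimeKNN:
    """Instance-based anytime classifier: examine training examples one by one and
    keep a best-so-far answer that can be returned whenever time runs out."""

    def __init__(self, X, y, k=3):
        self.X, self.y, self.k = np.asarray(X, float), np.asarray(y), k

    def classify(self, query, budget):
        """budget = number of training examples we are allowed to inspect."""
        heap = []                                  # max-heap (negative distance) of k nearest so far
        answer = None
        for i in range(min(budget, len(self.X))):
            d = np.linalg.norm(self.X[i] - query)
            if len(heap) < self.k:
                heapq.heappush(heap, (-d, self.y[i]))
            elif -heap[0][0] > d:
                heapq.heapreplace(heap, (-d, self.y[i]))
            labels = [lbl for _, lbl in heap]
            answer = max(set(labels), key=labels.count)   # majority vote, best so far
        return answer

    def update(self, x, label, max_size=1000):
        """Incremental update with a sliding window over the stream (assumed policy)."""
        self.X = np.vstack([self.X, x])[-max_size:]
        self.y = np.append(self.y, label)[-max_size:]

# Usage: interrupt early (small budget) or late (large budget) and compare the answers.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2)); y = (X[:, 0] > 0).astype(int)
clf = AnytimeKNN(X, y, k=5)
print(clf.classify(np.array([1.0, 0.0]), budget=20), clf.classify(np.array([1.0, 0.0]), budget=500))
```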
118

Identifikace abnormálních EKG segmentů pomocí metody Multiple-Instance Learning / Identification of Abnormal ECG Segments Using Multiple-Instance Learning

Šťávová, Karolína January 2021 (has links)
Heart arrhythmias are a very common heart condition whose incidence is rising. This thesis focuses on the detection of premature ventricular contractions in 12-lead ECG records by means of deep learning. The location of these arrhythmias (the key instances) in the record is found using a technique based on Multiple-Instance Learning. The theoretical part of the thesis describes basic electrophysiology of the heart and deep learning with a focus on convolutional neural networks. A program was then created in the Python programming language containing a model based on the InceptionTime architecture, which was used to classify the signals into the selected classes. Grad-CAM was implemented to find the locations of the key instances in the ECGs. The quality of the arrhythmia detection was evaluated using the F1 score, and the results are discussed at the end of the thesis.
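A common way to obtain this kind of weak localization is to let a convolutional network score each ECG segment and aggregate the segment scores into a record-level prediction, so that the highest-scoring segments act as the key instances. The sketch below is a generic multiple-instance formulation in PyTorch; the layer sizes and segment length are arbitrary assumptions, and it is not the thesis's InceptionTime/Grad-CAM implementation.

```python
import torch
import torch.nn as nn

class MILECGNet(nn.Module):
    """Per-segment scoring with max-pooling aggregation over a 12-lead ECG record."""

    def __init__(self, n_leads=12, seg_len=250):
        super().__init__()
        self.seg_len = seg_len
        self.encoder = nn.Sequential(              # scores one segment of shape (n_leads, seg_len)
            nn.Conv1d(n_leads, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(32, 1),
        )

    def forward(self, record):
        # record: (batch, n_leads, length); split into non-overlapping segments (instances)
        b, c, length = record.shape
        n_seg = length // self.seg_len
        segs = record[:, :, : n_seg * self.seg_len].reshape(b, c, n_seg, self.seg_len)
        segs = segs.permute(0, 2, 1, 3).reshape(b * n_seg, c, self.seg_len)
        seg_scores = self.encoder(segs).reshape(b, n_seg)      # instance-level scores
        record_score, key_idx = seg_scores.max(dim=1)          # bag label = max instance score
        return torch.sigmoid(record_score), key_idx, seg_scores

# Usage: a 10-second, 500 Hz record -> 20 segments; key_idx points at the suspected PVC segment.
model = MILECGNet()
prob, key_idx, seg_scores = model(torch.randn(2, 12, 5000))
print(prob.shape, key_idx)
```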
119

Zertifizierende verteilte Algorithmen / Certifying Distributed Algorithms

Völlinger, Kim 22 October 2020 (has links)
Eine Herausforderung der Softwareentwicklung ist, die Korrektheit einer Software sicherzustellen. Testen bietet es keine mathematische Korrektheit. Formale Verifikation ist jedoch oft zu aufwändig. Laufzeitverifikation steht zwischen den beiden Methoden. Laufzeitverifikation beantwortet die Frage, ob ein Eingabe-Ausgabe-Paar korrekt ist. Ein zertifizierender Algorithmus überzeugt seinen Nutzer durch ein Korrektheitsargument zur Laufzeit. Dafür berechnet ein zertifizierender Algorithmus für eine Eingabe zusätzlich zur Ausgabe noch einen Zeugen – ein Korrektheitsargument. Jeder zertifizierende Algorithmus besitzt ein Zeugenprädikat: Ist dieses erfüllt für eine Eingabe, eine Ausgabe und einen Zeugen, so ist das Eingabe-Ausgabe-Paar korrekt. Ein simpler Algorithmus, der das Zeugenprädikat für den Nutzer entscheidet, ist ein Checker. Die Korrektheit des Checkers ist folglich notwendig für den Ansatz und die formale Instanzverifikation, bei der wir Checker verifizieren und einen maschinen-geprüften Beweis für die Korrektheit eines Eingabe-Ausgabe-Paars zur Laufzeit gewinnen. Zertifizierende sequentielle Algorithmen sind gut untersucht. Verteilte Algorithmen, die auf verteilten Systemen laufen, unterscheiden sich grundlegend von sequentiellen Algorithmen: die Ausgabe ist über das System verteilt oder der Algorithmus läuft fortwährend. Wir untersuchen zertifizierende verteilte Algorithmen. Unsere Forschungsfrage ist: Wie können wir das Konzept zertifizierender sequentieller Algorithmen so auf verteilte Algorithmen übertragen, dass wir einerseits nah am ursprünglichen Konzept bleiben und andererseits die Gegebenheiten verteilter Systeme berücksichtigen? Wir stellen eine Methode der Übertragung vor. Die beiden Ziele abwägend entwickeln wir eine Klasse zertifizierender verteilter Algorithmen, die verteilte Zeugen berechnen und verteilte Checker besitzen. Wir präsentieren Fallstudien, Entwurfsmuster und ein Framework zur formalen Instanzverifikation. / A major problem in software engineering is to ensure the correctness of software. Testing offers no mathematical correctness. Formal verification is often too costly. Runtime verification stands between the two methods. Runtime verification answers the question whether an input-output pair is correct. A certifying algorithm convinces its user at runtime by offering a correctness argument. For each input, a certifying algorithm computes an output and additionally a witness. Each certifying algorithm has a witness predicate – a predicate with the property: being satisfied for an input, output and witness implies the input-output pair is correct. A simple algorithm deciding the witness predicate for the user is a checker. Hence, the checker’s correctness is crucial to the approach and motivates formal instance verification where we verify checkers and obtain machine-checked proofs for the correctness of an input-output pair at runtime. Certifying sequential algorithms are well-established. Distributed algorithms, designed to run on distributed systems, behave fundamentally different from sequential algorithms: their output is distributed over the system or they even run continuously. We investigate certifying distributed algorithms. Our research question is: How can we transfer the concept of certifying sequential algorithms to distributed algorithms such that we are in line with the original concept but also adapt to the conditions of distributed systems? 
In this thesis, we present a method to transfer the concept: Weighing up both sometimes conflicting goals, we develop a class of certifying distributed algorithms that compute distributed witnesses and have distributed checkers. We offer case studies, design patterns and a framework for formal instance verification. Additionally, we investigate other methods to transfer the concept of certifying algorithms to distributed algorithms.
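The witness-predicate idea can be made concrete with a classic sequential example: a certifying bipartiteness test whose witness is either a 2-coloring or an odd cycle, plus a simple checker that decides the witness predicate at runtime. This is a textbook-style illustration of the general concept, not one of the distributed case studies from the thesis.

```python
from collections import deque

def certifying_bipartite(adj):
    """adj maps every vertex to its neighbor list.
    Returns (answer, witness): a 2-coloring if bipartite, otherwise an odd cycle."""
    color, parent = {}, {}
    for s in adj:
        if s in color:
            continue
        color[s], parent[s] = 0, None
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in color:
                    color[v], parent[v] = 1 - color[u], u
                    queue.append(v)
                elif color[v] == color[u]:            # conflict edge: reconstruct an odd cycle
                    path_u, path_v = _path_to_root(u, parent), _path_to_root(v, parent)
                    lca = next(x for x in path_u if x in set(path_v))
                    cycle = path_u[:path_u.index(lca) + 1] + path_v[:path_v.index(lca)][::-1]
                    return False, cycle
    return True, color

def _path_to_root(u, parent):
    path = [u]
    while parent[path[-1]] is not None:
        path.append(parent[path[-1]])
    return path

def checker(adj, answer, witness):
    """Decides the witness predicate: accept only if the witness proves the answer."""
    if answer:   # witness is a 2-coloring: no edge may be monochromatic
        return all(witness[u] != witness[v] for u in adj for v in adj[u])
    # witness is a cycle: all its edges (including the wrap-around) must exist and its length be odd
    edges_ok = all(witness[(i + 1) % len(witness)] in adj[witness[i]] for i in range(len(witness)))
    return edges_ok and len(witness) % 2 == 1

# Usage: the user never has to trust the algorithm, only the (much simpler) checker.
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
ans, wit = certifying_bipartite(triangle)
print(ans, wit, checker(triangle, ans, wit))
```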
120

3D Instance Segmentation of Cluttered Scenes : A Comparative Study of 3D Data Representations

Konradsson, Albin, Bohman, Gustav January 2021 (has links)
This thesis provides a comparison between instance segmentation methods using point clouds and depth images. Specifically, their performance on cluttered scenes of irregular objects in an industrial environment is investigated. Recent work by Wang et al. [1] has suggested potential benefits of a point cloud representation when performing deep learning on data from 3D cameras. However, little work has been done to enable quantifiable comparisons between methods based on different representations, particularly on industrial data. Generating synthetic data provides accurate grayscale, depth map, and point cloud representations for a large number of scenes and can thus be used to compare methods regardless of data type. The datasets in this work are created using a tool provided by SICK. They simulate postal packages on a conveyor belt scanned by a LiDAR, closely resembling a common industry application. Two datasets are generated. One dataset has low complexity, containing only boxes. The other has higher complexity, containing a combination of boxes and multiple types of irregularly shaped parcels. State-of-the-art instance segmentation methods are selected based on their performance on existing benchmarks. We chose PointGroup by Jiang et al. [2], which uses point clouds, and Mask R-CNN by He et al. [3], which uses images. The results suggest that there may be benefits to using a point cloud representation over depth images. PointGroup performs better in terms of the chosen metric on both datasets. On low-complexity scenes, the inference times are similar between the two methods tested. However, on higher-complexity scenes, Mask R-CNN is significantly faster.
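Comparing methods that consume different data representations becomes straightforward once both are scored with the same instance-level metric. A common choice is to match predicted and ground-truth instance masks greedily by IoU and report precision and recall at a threshold; the sketch below shows such an evaluation step as an assumed example, since the exact metric is not specified in the abstract.

```python
import numpy as np

def iou(mask_a, mask_b):
    """Intersection-over-union of two boolean instance masks (per-pixel or per-point)."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union else 0.0

def match_instances(pred_masks, gt_masks, thr=0.5):
    """Greedy one-to-one matching of predictions to ground truth at IoU >= thr."""
    matched_gt, tp = set(), 0
    for pm in pred_masks:
        ious = [iou(pm, gm) if j not in matched_gt else 0.0 for j, gm in enumerate(gt_masks)]
        j_best = int(np.argmax(ious)) if ious else -1
        if j_best >= 0 and ious[j_best] >= thr:
            matched_gt.add(j_best)
            tp += 1
    precision = tp / len(pred_masks) if pred_masks else 0.0
    recall = tp / len(gt_masks) if gt_masks else 0.0
    return precision, recall

# Toy usage: instance masks over 10 points of a scene (works the same for pixels or 3D points).
gt = [np.array([1, 1, 1, 0, 0, 0, 0, 0, 0, 0], bool), np.array([0, 0, 0, 1, 1, 1, 1, 0, 0, 0], bool)]
pred = [np.array([1, 1, 0, 0, 0, 0, 0, 0, 0, 0], bool), np.array([0, 0, 0, 0, 1, 1, 1, 1, 0, 0], bool)]
print(match_instances(pred, gt))
```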
