11

Engineering-oriented control architecture based on action primitives for robotics applications

Hennig, Matthias 27 August 2012 (has links)
This work introduces Apeca, a flexible control architecture for robotic systems and robot-based manufacturing. The conceptual design combines requirements identified in various robotic control approaches. The main focus of the resulting concept is a simplified engineering process for controller design. This approach is supported by the use of atomic system behaviours, the so-called action primitives, in a special module hierarchy. For this purpose, the architecture separates a functional, behaviour-based system model, in which action primitives are executed hierarchically and in parallel, from a sequential control model that activates the primitives task-dependently. These models are assigned to different users through a distinct user concept. An object-oriented implementation of the proposed architecture allows the utilization and synchronisation of multiple (sub-)systems within one framework. The work discusses the proposed design, presents a prototypical implementation, and concludes with results from several demonstration scenarios.
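The split the abstract describes — atomic behaviours executed in parallel by a functional layer, activated task-dependently by a sequential layer — can be sketched as follows. This is a minimal illustration in Python; all class and method names are hypothetical and not taken from the Apeca implementation.

```python
from abc import ABC, abstractmethod

class ActionPrimitive(ABC):
    """An atomic system behaviour that can be activated and stepped."""
    def __init__(self, name: str):
        self.name = name
        self.active = False

    @abstractmethod
    def step(self, dt: float) -> None:
        """Execute one control cycle of this behaviour."""

class MoveToPose(ActionPrimitive):
    def step(self, dt: float) -> None:
        pass  # hypothetical: interpolate towards a target pose

class MonitorForce(ActionPrimitive):
    def step(self, dt: float) -> None:
        pass  # hypothetical: watch a force/torque threshold

class BehaviourLayer:
    """Function-oriented model: runs every active primitive each cycle."""
    def __init__(self, primitives):
        self.primitives = primitives

    def cycle(self, dt: float) -> None:
        for p in self.primitives:
            if p.active:
                p.step(dt)

class TaskSequencer:
    """Sequence-oriented model: activates primitives task-dependently."""
    def __init__(self, layer: BehaviourLayer, plan):
        self.layer, self.plan = layer, plan  # plan: list of name-sets

    def advance(self, stage: int) -> None:
        for p in self.layer.primitives:
            p.active = p.name in self.plan[stage]

# usage: stage 0 moves only; stage 1 moves while monitoring force
layer = BehaviourLayer([MoveToPose("move"), MonitorForce("guard")])
seq = TaskSequencer(layer, plan=[{"move"}, {"move", "guard"}])
seq.advance(1)
layer.cycle(dt=0.01)
```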
12

A Generic Approach to Component-Level Evaluation in Information Retrieval

Kürsten, Jens 19 November 2012 (has links) (PDF)
Research in information retrieval deals with the theories and models that constitute the foundations for any kind of service that provides access or pointers to particular elements of a collection of documents in response to a submitted information need. The specific field of information retrieval evaluation is concerned with the critical assessment of the quality of search systems. Empirical evaluation based on the Cranfield paradigm, using a specific collection of test queries in combination with relevance assessments in a laboratory environment, is the classic approach to comparing the impact of retrieval systems and their underlying models on retrieval effectiveness. In the past two decades, international campaigns like the Text Retrieval Conference have led to huge advances in the design of experimental information retrieval evaluations. But in general, the focus of this system-driven paradigm has remained on the comparison of system results; that is, retrieval systems are treated as black boxes. This practice has been criticised, and recent works have proposed studying system configurations and their individual components instead. This thesis proposes a generic approach to the evaluation of retrieval systems at the component level. Its focus is on the key components needed to address typical ad-hoc search tasks, such as finding books on a particular topic in a large set of library records. A central approach in this work is the further development of the Xtrieval framework through the integration of widely used IR toolkits, in order to eliminate the limitations of individual tools. Strong empirical results at international campaigns covering various types of evaluation tasks confirm both the validity of this approach and the flexibility of the Xtrieval framework. Modern information retrieval systems contain various components that are important for solving particular subtasks of the retrieval process. This thesis illustrates the detailed analysis of important system components needed to address ad-hoc retrieval tasks. Here, the design and implementation of the Xtrieval framework offer a variety of approaches for flexible system configurations. Xtrieval has been designed as an open system that allows the integration of further components and tools, as well as addressing search tasks other than ad-hoc retrieval. This design makes automated component-level evaluation of retrieval approaches possible. Both the scale and impact of these possibilities are demonstrated by an empirical experiment covering more than 13,000 individual system configurations, tested on four test collections for ad-hoc search. The results of this experiment are manifold. For instance, particular implementations of ranking models fail systematically on all tested collections. The exploratory analysis of the ranking models empirically confirms the relationships between different implementations of models that share theoretical foundations. The obtained results also suggest that the impact of most instances of IR system components on retrieval effectiveness depends on the test collections used for evaluation. Due to the scale of the component-level evaluation experiment, not all possible interactions of the system components under examination could be analysed in this work.
For this reason, the resulting data set is made publicly available to the entire research community for further studies.
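The component-level experiment described above is essentially a grid over interchangeable component choices. A minimal sketch of how such a grid multiplies into thousands of configurations; the component names are generic stand-ins, not the actual Xtrieval component set:

```python
from itertools import product

# one axis per interchangeable component
stemmers    = ["none", "porter", "snowball"]
stopwords   = ["none", "smart"]
rankers     = ["bm25", "tfidf", "lm_dirichlet", "dfr_bb2"]
expansion   = ["off", "rocchio"]
collections = ["coll_a", "coll_b", "coll_c", "coll_d"]

configs = list(product(stemmers, stopwords, rankers, expansion))
print(len(configs) * len(collections))  # 48 * 4 = 192 runs in this toy grid

# each run would index and search with one configuration, e.g.:
for stem, stop, rank, exp in configs[:1]:
    run_id = f"{stem}-{stop}-{rank}-{exp}"
    # evaluate(run_id, collection) -> effectiveness score (stubbed out)
    print(run_id)
```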
13

Workflow for Bioinformatics

MELISSA LEMOS 11 February 2005 (has links)
Genome projects usually start with a sequencing phase, where experimental data, usually DNA sequences, are generated without any biological interpretation. DNA sequences contain codes responsible for the production of protein and RNA sequences, while protein sequences participate in all biological phenomena, such as cell replication, energy production, immunological defense, muscular contraction, neurological activity and reproduction. DNA, RNA and protein sequences are called biosequences in this thesis. The fundamental challenge researchers face lies exactly in analyzing these sequences to derive information that is biologically relevant. During the analysis phase, researchers use a variety of analysis programs and access large data sources holding Molecular Biology data. The growing number of Bioinformatics data sources and analysis programs has indeed enormously facilitated the analysis phase, but it also creates a demand for systems that facilitate the use of such computational resources. Given this scenario, this thesis addresses the use of workflows to compose Bioinformatics analysis programs that access data sources, thereby facilitating the analysis phase. An ontology modeling the analysis programs and data sources commonly used in Bioinformatics is first described. This ontology is derived from a careful study, also summarized in the thesis, of the computational resources researchers in Bioinformatics presently use. A framework for biosequence analysis management systems is described next. The system is divided into two major components. The first is a Bioinformatics workflow management system that helps researchers define, validate, optimize and run workflows combining Bioinformatics analysis programs. The second is a Bioinformatics data management system that helps researchers manage large volumes of Bioinformatics data. The framework includes an ontology manager that stores Bioinformatics ontologies, such as the one previously described. Lastly, instantiations of the framework are described for three types of working environments commonly found and suggestively called the personal environment, the laboratory environment and the community environment. For each of these instantiations, aspects related to workflow optimization and execution are discussed in detail.
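The workflow idea at the core of the thesis — composing analysis programs into a dependency-ordered pipeline — can be illustrated with a small sketch. The step names and bodies are hypothetical placeholders, not the thesis's actual ontology or engine:

```python
from graphlib import TopologicalSorter  # Python 3.9+

def fetch_sequences(ctx):
    ctx["seqs"] = ["ACGTACGT"]          # placeholder raw biosequences

def run_alignment(ctx):
    ctx["hits"] = list(ctx["seqs"])     # placeholder for e.g. a BLAST call

def annotate(ctx):
    ctx["report"] = f"{len(ctx['hits'])} hit(s) annotated"

steps = {"fetch": fetch_sequences, "align": run_alignment, "annotate": annotate}
deps = {"align": {"fetch"}, "annotate": {"align"}}  # step -> prerequisites

ctx = {}
for name in TopologicalSorter(deps).static_order():  # fetch, align, annotate
    steps[name](ctx)
print(ctx["report"])
```

A real workflow manager adds what the sketch omits: validation of step inputs against the ontology, optimization of execution order, and persistence of intermediate data.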
14

Open instruments: a framework for the development and performance of digital musical instruments in MaxMSP

Ângelo, Tiago Alexandre da Silva January 2012 (has links)
Master's thesis. Master's programme in Multimedia, Faculdade de Engenharia, Universidade do Porto. 2012
15

A software framework to support distributed command and control applications

Duvenhage, Arno 09 August 2011 (has links)
This dissertation discusses a software application development framework. The framework supports developing software applications within the context of Joint Command and Control, which includes interoperability with network-centric systems as well as with existing legacy systems. The next generation of Command and Control systems is expected to be built on common architectures or enterprise middleware. However, enterprise middleware does not directly address integration with legacy Command and Control systems, nor does it address integration with existing and future tactical systems like fighter aircraft. The software framework discussed in this dissertation enables existing legacy systems and tactical systems to interoperate with each other; it enables interoperability with the Command and Control enterprise; and it enables simulated systems to be deployed within a real environment. The framework does all of this through a unique distributed architecture that supports both system interoperability and the simulation of systems and equipment within the context of Command and Control. This hybrid approach is the key to the success of the framework. There is a strong focus on the quality of the framework, and the current implementation has already been applied successfully within the Command and Control environment. The current framework implementation is also supplied on a DVD with this dissertation. / Dissertation (MEng)--University of Pretoria, 2011. / Electrical, Electronic and Computer Engineering / unrestricted
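The gateway role described here — letting legacy formats, tactical systems and enterprise middleware interoperate — is essentially an adapter pattern around a common internal message representation. A hedged sketch, with all classes and the message layout invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class TrackMessage:
    """Common internal representation shared by all adapters."""
    track_id: str
    lat: float
    lon: float

class LegacyAdapter:
    """Parses an invented fixed-width legacy record into the common form."""
    def to_internal(self, raw: str) -> TrackMessage:
        return TrackMessage(raw[0:4], float(raw[4:12]), float(raw[12:20]))

class EnterprisePublisher:
    """Stands in for publishing onto the enterprise middleware."""
    def publish(self, msg: TrackMessage) -> None:
        print(f"publish {msg.track_id} @ ({msg.lat}, {msg.lon})")

# a legacy node's record flows through the gateway to the enterprise side
raw = "T001-25.7460028.2293"
EnterprisePublisher().publish(LegacyAdapter().to_internal(raw))
```

The same common representation also lets simulated nodes stand in for real equipment, which is the hybrid interoperability/simulation approach the dissertation describes.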
16

Extended travelling fire method framework with an OpenSees-based integrated tool SIFBuilder

Dai, Xu January 2018 (has links)
Many studies of fire-induced thermal and structural behaviour in large compartments, carried out over the past two decades, show a great deal of non-uniformity, in contrast to the homogeneous compartment temperature assumed in current fire safety engineering practice. Furthermore, some large compartment fires burn locally and tend to move across entire floor plates over a period of time as the fuel is consumed. This kind of fire scenario is beginning to be idealized as 'travelling fires' in the context of performance-based structural and fire safety engineering. However, previous research on travelling fires still relies on highly simplified travelling fire models (i.e. Clifton's model and Rein's model), and no equivalent numerical tool can perform such simulations, which involve analysis of realistic fire, heat transfer and thermo-mechanical response in a single software package in an automatically coupled manner. Both issues hinder the advance of research on performance-based structural fire engineering. To address them, the author develops an extended travelling fire method (ETFM) framework and an integrated, computationally efficient tool. The experiments conducted to characterize travelling fires over the past two decades are reviewed, in conjunction with the currently available travelling fire models. It is found that no travelling fire experiment performed to date has recorded both the structural response and the mass loss rate of the fuel (to estimate the fire heat release rate) in a single test, which implies that closer collaboration between structural and fire engineering teams is needed, especially on the travelling fire research topic. In addition, an overview of the development of the OpenSees software framework for modelling structures in fire is presented, addressing its theoretical background, fundamental assumptions and inherent limitations. After a decade of development, OpenSees has modules for fire, heat transfer and thermo-mechanical analysis. Meanwhile, it is one of the few structural fire modelling packages that is open source and free to the entire community, allowing interested researchers to use and contribute to it at no expense. An OpenSees-based integrated tool called SIFBuilder is developed by the author and co-workers, which can perform fire modelling, heat transfer analysis and thermo-mechanical analysis in a single package in an automatically coupled manner. This lets structural engineers apply fire loading to their structures like other loading types (e.g. seismic loading, gravity loading), without manually transferring the fire and heat transfer modelling results to each structural element and assembling them into the entire structure. Engineers are thus free to focus on the structural response for performance-based design under different fire scenarios, without delving into the modelling details of fire and heat transfer analysis. Moreover, the efficiency gained through this automatic coupling grows with the size of the structure and the realism of the fire scenario (e.g. travelling fires). This advantage is confirmed by the studies carried out in this research, covering 29 travelling fire scenarios with a total of 696 heat transfer analyses of structural members, undertaken at very modest computational cost.
In addition, a set of benchmark problems for verification and validation of OpenSees/SIFBuilder is investigated, demonstrating good agreement with analytical solutions, ABAQUS, SAFIR and experimental data. These benchmark problems can also be used by interested researchers to verify their own numerical or analytical models, and can serve as an induction guide to OpenSees/SIFBuilder. Significantly, this research puts forward an extended travelling fire method (ETFM) framework that can predict fire severity under a travelling fire concept with an upper bound. The framework respects energy and mass conservation, rather than simply forcing other independent models to 'travel' across the compartment (i.e. the modified parametric fire curves in Clifton's model, or the 800°C-1200°C temperature block and Alpert's ceiling jet in Rein's model). It combines Hasemi's localized fire model for the fire plume with a simple smoke layer calculation, using the FIRM zone model for the areas of the compartment away from the fire. Rather than mainly investigating the thermal impact of various ratios of fire size to compartment size (e.g. 5%, 10%, 25%, 75%), as in Rein's model, this research investigates the thermal impact of travelling fires through explicit representation of various fire spread rates and fuel load densities, which are the key input parameters of the ETFM framework. To represent far-field thermal exposures, two zone models (the ASET zone model and the FIRM zone model) and the ETFM framework are implemented in SIFBuilder, in order to provide the community with a 'vehicle' to try, test and further improve both the ETFM framework and SIFBuilder itself. It is found that for 'slow' travelling fires (i.e. low fire spread rates), the near-field fire plume dominates the thermal impact, whereas for 'fast' travelling fires (i.e. high fire spread rates), the far-field smoke dominates. Furthermore, the through-depth thermal gradients under different travelling fire scenarios are explored, especially the 'thermal gradient reversal' that occurs as the near-field fire plume approaches and then leaves a structural member. This reversal fundamentally reverses the thermally induced bending moment from hogging to sagging. The modelling results suggest that the peak thermal gradient on near-field approach is more sensitive to the fuel load density than to the fire spread rate, with larger peak values captured at lower fuel load densities. The reverse peak thermal gradient on near-field departure is likewise sensitive to the fuel load density rather than the fire spread rate, but is inversely proportional to the fuel load density. Finally, the key assumptions of the ETFM framework are rationalised and its limitations are emphasized. Design instructions that can readily be used by structural fire engineers applying the ETFM framework are also included. Hence, more optimised and robust structural designs under such fire threats can be produced, and we believe these efforts will advance performance-based structural and fire safety engineering.
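Two of the key ETFM inputs named above, fire spread rate and fuel load density, already determine simple first-order travelling fire quantities. A back-of-envelope sketch, assuming the commonly used relations t_burn = q_f / Q'' (fuel load density over heat release rate per unit area) and a constant spread rate; the numbers are illustrative, not values from the thesis:

```python
q_f = 570e6     # fuel load density [J/m^2] (~570 MJ/m^2, illustrative)
hrrpua = 500e3  # heat release rate per unit area [W/m^2] (illustrative)
spread = 0.01   # fire spread rate [m/s] (illustrative)

t_burn = q_f / hrrpua          # local burning duration: 1140 s (~19 min)
fire_length = spread * t_burn  # length of the burning region: 11.4 m
print(f"burn duration {t_burn:.0f} s, fire base length {fire_length:.1f} m")
```

The ETFM framework itself goes further, coupling the moving burning region with Hasemi's plume model for the near field and a FIRM zone model for the far-field smoke layer.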
17

Performance Analysis of TCAMs in Switches

Tawakol, Abdel Maguid 25 April 2012 (has links)
The Catalyst 6500 is a modern commercial switch capable of processing millions of packets per second through the use of specialized hardware. One of the main hardware components aiding the switch in this task is the Ternary Content Addressable Memory (TCAM). TCAMs update themselves with data relevant to routing and switching based on the traffic flowing through the switch. This enables the switch to forward future packets destined to a previously discovered location at very high speed. The problem is that TCAMs have limited size, and once they reach capacity, the switch has to fall back on software to perform the switching and routing, a much slower process than hardware switching through the TCAM. A framework has been developed to analyze the switch's performance once the TCAM has reached capacity, and to measure the penalty associated with a cache miss. The thesis concludes with recommendations and future work.
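The miss penalty the framework measures can be illustrated with a toy model: a bounded "hardware" table that learns flows until full, with every miss paying the software-switching cost. Capacities and relative costs below are arbitrary illustrations, not Catalyst 6500 figures:

```python
TCAM_CAPACITY = 4          # entries the "hardware" table can hold
HW_COST, SW_COST = 1, 100  # arbitrary relative lookup costs

tcam, total_cost = set(), 0
flows = ["A", "B", "C", "D", "E", "A", "E", "F", "A"]
for flow in flows:
    if flow in tcam:
        total_cost += HW_COST        # hardware-switched: fast path
    else:
        total_cost += SW_COST        # software-switched: the miss penalty
        if len(tcam) < TCAM_CAPACITY:
            tcam.add(flow)           # learn the flow while room remains
print(f"total relative cost {total_cost}, {len(tcam)}/{TCAM_CAPACITY} entries")
```

Once the table is full, every new flow pays the software cost on every packet, which is exactly the saturation regime the thesis analyzes.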
20

Development of a Flexible Software Framework for Biosignal PI: An Open-Source Biosignal Acquisition and Processing System

Röstin, Martin January 2016 (has links)
As the world population ages, the healthcare system faces new challenges in treating more patients at lower cost than today. One trend in addressing this problem is to increase the opportunities for in-home care, which requires safe and cost-effective monitoring systems. Biosignal PI is an ongoing open-source project created to develop a flexible and affordable platform for building stand-alone devices able to measure and process physiological signals. This master's thesis project, performed at the department of Medical Sensors, Signals and Systems at the School of Technology and Health, aimed to further develop the Biosignal PI software by constructing a new flexible software framework architecture usable for measuring and processing different types of biosignals. The project also aimed to implement Heart Rate Variability (HRV) analysis in the Biosignal PI software, and to develop a graphical user interface (GUI) for the Raspberry PI hardware module PiFace Control and Display. The project produced a new, flexible, abstract software framework for Biosignal PI. The framework abstracts all hardware specifics into smaller interchangeable modules, each handling its specific task independently, so that changes can be made to the Biosignal PI software without rewriting the core. The new framework was implemented on the existing hardware setup, consisting of a Raspberry PI, a small and affordable single-board computer, connected to the ADAS1000, a low-power analog front end capable of recording an electrocardiogram (ECG). Two GUIs were implemented to control the Biosignal PI software. The first extends the original GUI with the ability to perform HRV analysis on the Raspberry PI; it requires a mouse and a computer screen. To control the Biosignal PI without a mouse, the project also created a GUI for the PiFace Control and Display, which enables the user to collect and store ECG signals without a large computer screen, increasing the mobility of the Biosignal PI device. To support the development process, and to make the project more compliant with the Medical Device Directive, several development tools were integrated: a CMake build system, the Googletest framework for automated testing, and the documentation generator Doxygen for producing software documentation. The Biosignal PI software developed in this thesis is available through Github at https://github.com/biosignalpi/Version-A1-Rapsberry-PI
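The "interchangeable module" design described above amounts to keeping hardware specifics behind a narrow interface, so the core loop never changes when the front end does. A minimal sketch; the class and method names are assumptions, not the actual Biosignal PI API:

```python
from abc import ABC, abstractmethod

class SignalSource(ABC):
    """Abstracts an acquisition front end (e.g. an ECG chip)."""
    @abstractmethod
    def read_samples(self, n: int) -> list:
        ...

class FakeEcgSource(SignalSource):
    """Stand-in source so the core can be tested without hardware."""
    def read_samples(self, n: int) -> list:
        return [0.0] * n

def acquire(source: SignalSource, seconds: int, fs: int = 250) -> list:
    """Core loop: depends only on the interface, never on the hardware."""
    return source.read_samples(seconds * fs)

print(len(acquire(FakeEcgSource(), seconds=2)))  # 500 samples at 250 Hz
```

Swapping the fake source for a real ADAS1000-backed module would leave acquire() and everything above it untouched, which is the point of the framework's modular design.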
