71

Kvantitativní charakteristika herního výkonu v basketbalu českého národního družstva žen na olympijských hrách 2012 a jeho porovnání s výkonem na mistrovství světa 2010 / Quantitative characteristics of gaming performance in basketball Czech National team of women at the Olympic Games in 2012 and its comparison with the performance of the 2010 World Cup

Bendl, Martin January 2014 (has links)
Title of the thesis: Quantitative characteristics of the game performance of the Czech women's national basketball team at the 2012 Olympic Games and its comparison with the performance at the 2010 World Cup. The goal of the paper: To characterize the team game performance of the Czech women's national team at the 2012 Olympic Games and to compare it with the characteristics of the same team at the 2010 World Cup. Data were collected by observation, based on a specified set of quantitative game-performance indicators. Methodology: Quantitative analysis of team performance from video recordings (KVANTÝM). Results: The results show the values of the individual indicators in comparison with the opponents' indicators and with the team's own indicators from the previous top event. Keywords: Basketball, team game performance, quantitative video analysis, comparison, differences, results, success.
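To give a feel for the kind of indicator comparison reported in the thesis, the sketch below contrasts per-game indicator values from two tournaments. The indicator names and all numbers are invented for illustration; they are not taken from the KVANTÝM analysis.

```python
# Illustrative only: hypothetical per-game indicator averages, not KVANTÝM data.
indicators_og2012 = {"2pt_shots": 48.2, "3pt_shots": 17.4, "turnovers": 16.1, "rebounds": 38.5}
indicators_wc2010 = {"2pt_shots": 45.7, "3pt_shots": 19.0, "turnovers": 14.3, "rebounds": 40.2}

print(f"{'indicator':<12}{'OG 2012':>10}{'WC 2010':>10}{'diff':>8}")
for name, og_value in indicators_og2012.items():
    wc_value = indicators_wc2010[name]
    print(f"{name:<12}{og_value:>10.1f}{wc_value:>10.1f}{og_value - wc_value:>8.1f}")
```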
72

Systém Swim Smooth a plavecká specializace / Swim Smooth System and Swimming Specialization

Minaříková, Gabriela January 2012 (has links)
Title: Swim Smooth System and Swimming Specialization. Objectives: Using the criteria of the Australian Swim Smooth Swim Type coaching system, assign particular swim types to the natural predispositions of selected elite swimmers and thus verify the suitability of their specialization. Methods: This work was carried out as a methodological study that explores both the suitability of specialization in a selected group of elite swimmers and the potential benefits of the Swim Smooth Swim Type System compared to the more commonly used single-swim-type approach. The study involved measurement, visualization, monitoring, evaluation, analysis and correlation. We measured stroke length and stroke rate and observed each swimmer's individual freestyle technique. We evaluated swim types, somatotype, the personality trait of extroversion, and the swimmers' potential for speed, middle-distance or endurance performance. We analyzed the development of the swimmers' personal records in their main discipline and determined the degree of correlation among the particular criteria that determine Swim Smooth swim types. Finally, we evaluated the suitability of each swimmer's specialization and outlined directions for their further development. Results: The Swim Smooth Swim Type System revealed an unsuitable specialization for one swimmer. For all...
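As a small illustration of the stroke metrics mentioned in the abstract, the following sketch computes stroke rate and stroke length from lap time and stroke count, and a correlation between two hypothetical swim-type criteria. All values and names are illustrative assumptions, not data from the study (statistics.correlation requires Python 3.10 or newer).

```python
# Illustrative only: hypothetical lap data, not measurements from the study.
from statistics import correlation  # requires Python 3.10+

def stroke_metrics(lap_distance_m, lap_time_s, stroke_count):
    """Return stroke rate (strokes/min) and stroke length (m/stroke) for one lap."""
    stroke_rate = stroke_count / lap_time_s * 60.0
    stroke_length = lap_distance_m / stroke_count
    return stroke_rate, stroke_length

rate, length = stroke_metrics(lap_distance_m=50, lap_time_s=32.5, stroke_count=38)
print(f"stroke rate = {rate:.1f} strokes/min, stroke length = {length:.2f} m/stroke")

# Correlation between two hypothetical Swim Type criteria scored for six swimmers.
criterion_a = [3, 4, 2, 5, 4, 3]
criterion_b = [2, 4, 2, 5, 3, 3]
print(f"correlation = {correlation(criterion_a, criterion_b):.2f}")
```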
73

Descrição do padrão de desenvolvimento motor em lactentes com síndrome de Down pela avaliação dos movimentos generalizados / Description of motor development pattern by General Movements assessment in infants with Down syndrome

Herrero, Dafne 20 February 2017 (has links)
Introduction: As important as knowing "what" is changing in motor development is knowing "how" and "when" these changes occur. The lack of such data still does not allow this process to be described in a linear and continuous way in groups with movement disorders, such as newborns with Down syndrome. Memory may be a differentiating factor in motor performance and in the construction of movement trajectories; motor-cognitive integration and mnemonic activation could therefore be observed in the richness of the spontaneous motor repertoire of infants. General purpose: To assess the quality of spontaneous movement in infants with Down syndrome. Specific purposes: a) to describe the genetic and environmental characteristics of this group of infants and the current intervention process; b) to analyze the applicability of the General Movements assessment in infants up to five months of age in low- and middle-income countries; c) to assess spontaneous movement as a facilitator of the mother-infant bond. Methods: A review of scientific databases was first conducted to structure a feasible protocol for the assessment and the questionnaire. An exploratory study then evaluated 47 infants with Down syndrome, younger than five months, by means of the General Movements assessment, performed in person and from video recordings. A questionnaire collected information such as when the diagnosis was communicated to the parents, duration of breastfeeding, length of stay in the intensive care unit, parental employment, and parental age. The assessment was performed at the fidgety age (11 to 20 weeks after term age). Data were collected on 24 infants at the Darcy Vargas Children's Hospital and the APAE Institution of São Paulo (the hospital belongs to SUS, the Brazilian public health system), both reference centres for the care of children with Down syndrome in Brazil, and on 23 infants from the video database of the General Movements study centre in Austria. Results: The assessment score was significantly lower than in infants with a typical neurological outcome. Fourteen infants with Down syndrome showed normal fidgety movements (FM), 13 showed no FM, and 20 showed exaggerated, very fast or very slow FM. A lack of midline movements and several atypical postures were observed. Neither preterm birth nor congenital heart disease was associated with the presence of FM or with reduced mobility. Conclusions: Regarding the use of this low-cost assessment in Brazil, qualitative motor observation contributes to an assertive, fast and non-invasive functional evaluation of the infant's global movement; its application in vulnerable populations such as Brazil's is therefore highly recommended. Regarding the relevance of assessing the quality of general movements in infants with Down syndrome, the heterogeneity of FM and their peculiar characteristics indicate that intervention should start as early as possible to stimulate and improve the motor repertoire. The data indicate that the identification of FM in children with Down syndrome can be a clinical marker for planning individualized physiotherapeutic intervention; however, its absence cannot be used as an indicator of functional motor normality or as a reason to postpone clinical intervention. A second conclusion points to the absence of an association between congenital heart disease and FM in children with Down syndrome.
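The reported lack of association between congenital heart disease (or preterm birth) and fidgety movements is the kind of finding a contingency-table test can illustrate. The sketch below is illustrative only: the counts are hypothetical and the chi-square test is an assumed choice, not necessarily the statistical procedure used in the thesis.

```python
# Illustrative only: hypothetical counts, not the study's data.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: congenital heart disease (yes / no).
# Columns: fidgety movements present / absent-or-abnormal.
table = np.array([
    [10, 15],  # hypothetical counts for infants with congenital heart disease
    [ 4, 18],  # hypothetical counts for infants without congenital heart disease
])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}, dof = {dof}")
# A p-value above the chosen significance level would be consistent with the
# reported absence of association between heart disease and FM presence.
```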
74

Computer vision for continuous plankton monitoring / Visão computacional para o monitoramento contínuo de plâncton

Matuszewski, Damian Janusz 04 April 2014 (has links)
Plankton microorganisms constitute the base of the marine food web and play a great role in global atmospheric carbon dioxide drawdown. Moreover, because they are very sensitive to environmental changes, they make it possible to notice (and potentially counteract) such changes earlier than any other means. As such they not only influence the fishery industry but are also frequently used to analyze changes in exploited coastal areas and the influence of these interferences on the local environment and climate. As a consequence, there is a strong need for highly efficient systems allowing long-term and large-volume observation of plankton communities. This would provide us with a better understanding of the role of plankton in the global climate as well as help maintain the fragile environmental equilibrium. The adopted sensors typically provide huge amounts of data that must be processed efficiently without intensive manual work by specialists. A new system for general-purpose particle analysis in large volumes is presented. It has been designed and optimized for the continuous plankton monitoring problem; however, it can easily be applied as a versatile tool for analyzing moving fluids or in any other application in which the targets to be detected and identified move in a unidirectional flux. The proposed system is composed of three stages: data acquisition, target detection, and identification. Dedicated optical hardware is used to record images of small particles immersed in the water flux. Target detection is performed using a Visual Rhythm-based method which greatly accelerates the processing time and allows higher volume throughput. The proposed method detects, counts and measures organisms present in the water flux passing in front of the camera. Moreover, the developed software allows cropped plankton images to be saved, which not only greatly reduces the required storage space but also provides the input for their automatic identification. In order to assure maximal performance (up to 720 MB/s) the algorithm was implemented using CUDA for GPGPU. The method was tested on a large dataset and compared with an alternative frame-by-frame approach. The obtained plankton images were used to build a classifier that is applied to automatically identify organisms in plankton analysis experiments. For this purpose dedicated feature-extraction software was developed. Various subsets of the 55 shape characteristics were tested with different off-the-shelf learning models. The best accuracy of approximately 92% was obtained with Support Vector Machines. This result is comparable to the average expert manual identification performance. This work was developed under joint supervision with Professor Rubens Lopes (IO-USP).
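As a rough illustration of the Visual Rhythm idea used for detection (each frame contributes one fixed line of pixels, and stacking those lines over time yields a single 2-D image in which passing particles appear as compact blobs), here is a minimal OpenCV/NumPy sketch. The file name, the choice of the central column as the sampling line, and the simple Otsu thresholding are assumptions for illustration; the CUDA-accelerated pipeline described above is considerably more elaborate.

```python
# Minimal sketch of a Visual Rhythm image for unidirectional-flow monitoring.
# Assumptions: a video file "flow.avi", particles darker than the background,
# and the central image column as the sampling line.
import cv2
import numpy as np

cap = cv2.VideoCapture("flow.avi")
columns = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # One fixed column per frame; particles crossing it leave blobs in the
    # stacked image (rows = image height, columns = time).
    columns.append(gray[:, gray.shape[1] // 2])
cap.release()

rhythm = np.stack(columns, axis=1)           # visual rhythm image, H x num_frames
_, mask = cv2.threshold(rhythm, 0, 255,
                        cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
n_labels, _ = cv2.connectedComponents(mask)  # each blob is one candidate particle
print(f"detected {n_labels - 1} candidate particles")
```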
75

Spatio-temporal data interpolation for dynamic scene analysis

Kim, Kihwan 06 January 2012 (has links)
Analysis and visualization of dynamic scenes is often constrained by the amount of spatio-temporal information available from the environment. In most scenarios, we have to account for incomplete information and sparse motion data, requiring us to employ interpolation and approximation methods to fill in the missing information. Scattered data interpolation and approximation techniques have been widely used for completing surfaces and images from incomplete input data. We introduce such data interpolation and approximation approaches, working from limited sensors, into the domain of analyzing and visualizing dynamic scenes. Data from dynamic scenes is subject to constraints due to the spatial layout of the scene and/or the configurations of the video cameras in use. Such constraints include: (1) sparsely available cameras observing the scene, (2) limited field of view provided by the cameras in use, (3) incomplete motion at a specific moment, and (4) varying frame rates due to different exposures and resolutions. In this thesis, we establish these forms of incompleteness in the scene as spatio-temporal uncertainties, and propose solutions for resolving the uncertainties by applying scattered data approximation in the spatio-temporal domain. The main contributions of this research are as follows: First, we provide an efficient framework to visualize large-scale dynamic scenes from distributed static videos. Second, we adapt Radial Basis Function (RBF) interpolation to the spatio-temporal domain to generate a global motion tendency. The tendency, represented by a dense flow field, is used to optimally pan and tilt a video camera. Third, we propose a method to represent motion trajectories using stochastic vector fields. Gaussian Process Regression (GPR) is used to generate a dense vector field and the certainty of each vector in the field. The generated stochastic fields are used for recognizing motion patterns under varying frame rates and incomplete input videos. Fourth, we show that the stochastic representation of a vector field can also be used for modeling global tendency to detect regions of interest in dynamic scenes with camera motion. We evaluate and demonstrate our approaches in several applications for visualizing virtual cities, automating sports broadcasting, and recognizing traffic patterns in surveillance videos.
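To make the RBF interpolation step concrete, the following sketch turns a handful of sparse 2-D motion vectors into a dense flow field with SciPy's RBFInterpolator. The sample points, the thin-plate-spline kernel and the grid resolution are illustrative assumptions, not the parameters used in the thesis.

```python
# Sketch: dense 2-D flow field from sparse motion samples via RBF interpolation.
# The sample coordinates, vectors and kernel choice are illustrative assumptions.
import numpy as np
from scipy.interpolate import RBFInterpolator

# Sparse observations: positions (x, y) and motion vectors (u, v).
positions = np.array([[10, 10], [80, 20], [40, 60], [90, 90], [20, 85]], dtype=float)
vectors = np.array([[1.0, 0.2], [0.8, 0.5], [0.1, 1.0], [-0.3, 0.9], [0.5, -0.2]])

rbf = RBFInterpolator(positions, vectors, kernel="thin_plate_spline")

# Dense grid of query points covering a 100 x 100 scene.
gx, gy = np.meshgrid(np.arange(100), np.arange(100))
grid = np.column_stack([gx.ravel(), gy.ravel()]).astype(float)

flow = rbf(grid).reshape(100, 100, 2)  # dense "motion tendency" field
print(flow.shape, flow[50, 50])        # interpolated motion vector at the scene centre
```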
76

Recognition of human interactions with vehicles using 3-D models and dynamic context

Lee, Jong Taek, 1983- 11 July 2012 (has links)
This dissertation describes two distinctive methods for human-vehicle interaction recognition: one for ground-level videos and the other for aerial videos. For ground-level videos, this dissertation presents a novel methodology which is able to estimate a detailed status of a scene involving multiple humans and vehicles. The system tracks their configuration even when they are performing complex interactions with severe occlusion, such as when four persons are exiting a car together. The motivation is to identify the 3-D states of vehicles (e.g. the status of doors) and their relations with persons, which are necessary to analyze complex human-vehicle interactions (e.g. breaking into or stealing a vehicle), and the motion of humans and car doors, which is needed to detect atomic human-vehicle interactions. A probabilistic algorithm has been designed to track humans and analyze their dynamic relationships with vehicles using dynamic context. We have focused on two ideas. One is that many simple events can be detected by low-level analysis, and these detected events must be contextually consistent with the human/vehicle state tracking results. The other is that motion cues influence the states in the current and future frames, so analyzing motion is critical for detecting such simple events. Our approach updates the probability of a person (or a vehicle) having a particular state based on these basic observed events. The probabilistic inference allows the tracking process to reconcile event-based evidence with motion-based evidence. For aerial videos, the object resolution is low, the visual cues are vague, and the detection and tracking of objects are consequently less reliable. Any method that requires accurate object tracking or exact matching of event definitions is therefore best avoided. To address these issues, we present a temporal-logic-based approach which does not require training from event examples. At the low level, we employ dynamic programming to perform fast model fitting between the tracked vehicle and rendered 3-D vehicle models. At the semantic level, given the localized event region of interest (ROI), we verify the time series of human-vehicle relationships against the pre-specified event definitions in a piecewise fashion. With special interest in recognizing a person getting into and out of a vehicle, we have tested our method on a subset of the VIRAT Aerial Video dataset and achieved superior results.
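A minimal sketch of the probabilistic state-update idea: the belief over a car-door state is updated from low-level event detections with a simple Bayes step. The state names, events and likelihood values are illustrative assumptions, not the probabilities or event set used in the dissertation.

```python
# Sketch: updating the belief over a car-door state from low-level event detections.
# State names, events and likelihood values are illustrative assumptions,
# not the probabilities used in the dissertation.
DOOR_STATES = ("closed", "open")

# P(event detected | door state): a hypothetical observation model.
LIKELIHOOD = {
    "door_motion":   {"closed": 0.15, "open": 0.70},
    "person_nearby": {"closed": 0.30, "open": 0.60},
}

def update_belief(belief, event):
    """One Bayes step: belief maps each state to its probability."""
    posterior = {s: belief[s] * LIKELIHOOD[event][s] for s in DOOR_STATES}
    norm = sum(posterior.values())
    return {s: p / norm for s, p in posterior.items()}

belief = {"closed": 0.9, "open": 0.1}           # prior: doors are usually closed
for event in ["person_nearby", "door_motion"]:  # events from low-level analysis
    belief = update_belief(belief, event)
    print(event, {s: round(p, 3) for s, p in belief.items()})
```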
77

Self-organized traffic flows: a sequential conflict resolution approach

Hand, Troy S. 20 September 2013 (has links)
This thesis discusses the effect of sequential conflict-resolution maneuvers on a continuous flow of agents through a finite control volume. Video analysis of real-world traffic flows that exhibit self-organizing behavior is conducted to extract characteristics of those agents. A tool is created which stabilizes the input video and extracts motion from it using background subtraction. I discuss the tool in detail, as it was designed to be user-friendly and easily modifiable for other uses. The aim of the video analysis is to determine the characteristics of agents in self-organized traffic flow. Comparisons are made between agents under sequential conflict-resolution schemes and those that exhibit self-organizing behavior, to determine whether agents under sequential control can approach the behavior of those in a self-organized environment. Flow geometries are studied and generalized with the goal of determining the stability characteristics of arbitrary flow geometries. The stability analysis includes analytical proof of bounds on the conflict-resolution maneuvers.
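The motion-extraction step described above relies on background subtraction; as a rough illustration (not the author's tool), the sketch below applies OpenCV's MOG2 subtractor to an already-stabilized video. The file name and parameter values are assumptions.

```python
# Sketch: extracting moving agents from a stabilized traffic video with
# background subtraction. File name and parameters are illustrative assumptions.
import cv2

cap = cv2.VideoCapture("stabilized_traffic.mp4")
subtractor = cv2.createBackgroundSubtractorMOG2(history=300, varThreshold=25,
                                                detectShadows=True)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)                              # foreground (255) + shadows (127)
    _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)  # drop shadow pixels
    mask = cv2.medianBlur(mask, 5)                              # suppress speckle noise
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    agents = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 50]
    # `agents` now holds bounding boxes of the moving agents detected in this frame.
cap.release()
```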
78

Context-aware semantic analysis of video metadata

Steinmetz, Nadine January 2013 (has links)
The Semantic Web provides the information contained in the World Wide Web as machine-readable facts. In comparison to a keyword-based inquiry, semantic search enables a more sophisticated exploration of web documents. By clarifying the meaning behind entities, search results become more precise, and the semantics simultaneously enable an exploration of semantic relationships. However, unlike keyword searches, a semantic entity-focused search requires that web documents be annotated with semantic representations of common words and named entities. Manual semantic annotation of (web) documents is time-consuming; in response, automatic annotation services have emerged in recent years. These annotation services take continuous text as input, detect important key terms and named entities, and annotate them with semantic entities contained in widely used semantic knowledge bases, such as Freebase or DBpedia. Metadata of video documents require special attention. Semantic analysis approaches for continuous text cannot be applied, because the information of a context in video documents originates from multiple sources possessing different reliabilities and characteristics. This thesis presents a semantic analysis approach consisting of a context model and a disambiguation algorithm for video metadata. The context model takes into account the characteristics of video metadata and derives a confidence value for each metadata item. The confidence value represents the level of correctness and ambiguity of the textual information of the metadata item: the lower the ambiguity and the higher the prospective correctness, the higher the confidence value. The metadata items derived from the video metadata are analyzed in a specific order, from high to low confidence level. Previously analyzed metadata are used as reference points in the context for subsequent disambiguation. The contextually most relevant entity is identified by means of descriptive texts and semantic relationships to the context. The context is created dynamically for each metadata item, taking into account the confidence value and other characteristics. The proposed semantic analysis follows two hypotheses: metadata items of a context should be processed in descending order of their confidence value, and the metadata that pertains to a context should be limited by content-based segmentation boundaries. The evaluation results support the proposed hypotheses and show increased recall and precision for annotated entities, especially for metadata that originates from sources with low reliability. The algorithms have been evaluated against several state-of-the-art annotation approaches. The presented semantic analysis process is integrated into a video analysis framework and has been successfully applied in several projects for the semantic exploration of videos.
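A minimal sketch of the confidence-ordered disambiguation idea: metadata items are processed from the highest to the lowest confidence value, and each item's entity candidates are scored against a context built from items resolved earlier. The data structures, the word-overlap scoring and all names are assumptions for illustration; the thesis's algorithms use descriptive texts and knowledge-base relations rather than this toy measure.

```python
# Sketch: context-aware disambiguation of video metadata items, processed in
# descending order of confidence. All names, values and the word-overlap score
# are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class MetadataItem:
    text: str
    confidence: float  # higher = more reliable and less ambiguous
    candidates: list   # candidate entities: dicts with "label" and "description"

def overlap_score(description, context_terms):
    return len(set(description.lower().split()) & context_terms)

def disambiguate(items):
    context_terms = set()
    resolved = {}
    # Hypothesis 1: analyze metadata items from high to low confidence.
    for item in sorted(items, key=lambda i: i.confidence, reverse=True):
        best = max(item.candidates,
                   key=lambda c: overlap_score(c["description"], context_terms))
        resolved[item.text] = best["label"]
        # Already-resolved items extend the context for the remaining ones.
        context_terms |= set(best["description"].lower().split())
    return resolved

items = [
    MetadataItem("Jaguar", 0.4, [
        {"label": "Jaguar_(animal)", "description": "large wild cat, wildlife of the americas"},
        {"label": "Jaguar_Cars", "description": "british manufacturer of luxury cars"},
    ]),
    MetadataItem("wildlife documentary", 0.9, [
        {"label": "Nature_documentary", "description": "film about wild animals and wildlife"},
    ]),
]
print(disambiguate(items))  # the reliable item is resolved first and steers "Jaguar"
```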
79

Development of a Method of Analysis for Identifying an Individual Patient’s Perspective in Video-recorded Oncology Consultations

Healing, Sara 26 August 2013 (has links)
Patient-centred care has become an important model for health-care delivery, especially in cancer care. The implementation of this model includes patient-centred communication between the clinician and his or her patient. However, most research on patient-centred communication focuses on the clinicians' initiative: what clinicians should do and what information they should seek to elicit from patients. It is equally important to recognize what each individual patient can contribute about his or her unique perspective on the disease, its treatment, and the effects on what is important to this patient. This thesis reports the development of a system for analyzing over 1500 utterances made by patients in eight video-recorded oncology consultations at the British Columbia Cancer Agency, Vancouver Island Centre. The analysis distinguishes between biomedical information that the patient can provide and patient-centred information, which contributes the individual patient's unique perspective on any aspect of his or her illness or treatment. The resulting analysis system includes detailed operational definitions with examples, a decision tree, and .eaf files in ELAN software for viewing and for recording decisions. Two psychometric tests demonstrated that the system is replicable: high inter-analyst reliability (90% agreement between independent analysts) on a random sample of the data set and cross-validation to the remainder of the data set. A supplemental idiographic analysis of each consultation illustrates the important role that patient-centred information played in these consultations. This system could be an important tool for teaching clinicians to recognize the individual information that patients can provide and its relevance to their care.
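As an illustration of the reliability check mentioned above, the sketch below computes percent agreement and Cohen's kappa for two analysts' codings of the same utterances. The category labels and codings are hypothetical, not data from the thesis.

```python
# Sketch: checking inter-analyst reliability on utterance codings.
# The category labels and example codings are hypothetical, not the thesis data.
from sklearn.metrics import cohen_kappa_score

analyst_a = ["biomedical", "patient-centred", "patient-centred", "biomedical",
             "other", "patient-centred", "biomedical", "biomedical"]
analyst_b = ["biomedical", "patient-centred", "biomedical", "biomedical",
             "other", "patient-centred", "biomedical", "biomedical"]

agreement = sum(a == b for a, b in zip(analyst_a, analyst_b)) / len(analyst_a)
kappa = cohen_kappa_score(analyst_a, analyst_b)
print(f"percent agreement = {agreement:.0%}, Cohen's kappa = {kappa:.2f}")
```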
80

Inhaltsbasierte Erschließung und Suche in multimedialen Objekten / Content-based indexing and search in multimedia objects

Sack, Harald, Waitelonis, Jörg 25 January 2012 (has links) (PDF)
Our cultural memory stores ever larger amounts of information and data, yet only a vanishingly small fraction of this content can currently be searched and accessed through digital channels. The projects mediaglobe and yovisto make the growing stock of audiovisual documents findable and usable, and accompany media archives into the digital future. mediaglobe aims to index and make available audiovisual documents on German contemporary history by means of automated and semantic methods. The vision of mediaglobe is web-based access to comprehensive digital AV content in media archives. To this end, mediaglobe offers numerous automated procedures for analyzing audiovisual data, such as structural analysis, text recognition in video, speech analysis and genre analysis. The use of semantic technologies links the results of the AV analysis and improves the results of multimedia search both qualitatively and quantitatively. A rights-management tool provides information about the availability of the content. Innovative and intuitive user interfaces make access to cultural heritage an active experience. mediaglobe brings together the project partners Hasso-Plattner-Institut für Softwaresystemtechnik (HPI), Medien-Bildungsgesellschaft Babelsberg, FlowWorks and the archive of defa Spektrum, and is funded by the Federal Ministry of Economics and Technology within the research program "THESEUS - New Technologies for the Internet of Services". The video search engine yovisto, in contrast, specializes in recordings of academic lectures and implements explorative and semantic search strategies. yovisto supports a multi-stage 'explorative' search process in which the searcher can explore the holdings of the underlying media archive along manifold paths according to his or her current interest, so that at the end of this search process information is discovered whose existence the searcher did not previously know about. To make this possible, yovisto combines automated semantic media analysis with user-generated metadata for the content-based indexing of AV data, thereby enabling precise content-based search within video archives.
