161

An integrated approach to feature compensation combining particle filters and Hidden Markov Models for robust speech recognition

Mushtaq, Aleem 19 September 2013 (has links)
The performance of automatic speech recognition systems often degrades in adverse conditions where there is a mismatch between training and testing conditions. This is true for most modern systems, which employ Hidden Markov Models (HMMs) to decode speech utterances. One strategy is to map the distorted features back to clean speech features that correspond well to the features used for training the HMMs. This can be achieved by treating the noisy speech as a distorted version of the clean speech of interest: under this framework, we can track and consequently extract the underlying clean speech from the noisy signal, and use the recovered signal to perform utterance recognition. The particle filter is a versatile tracking technique that can be used where conventional techniques such as the Kalman filter often fall short. We propose a particle-filter-based algorithm that compensates corrupted features according to an additive noise model, incorporating both statistics from clean speech HMMs and the observed background noise to map noisy features back to clean speech features. Instead of using specific model- and state-level knowledge from the HMMs, which is hard to estimate, we pool model states into clusters as side information. Since each cluster encompasses more statistics than the original HMM states, it is more likely that the probability density function formed at the cluster level can cover the underlying speech variation and generate appropriate particle filter samples for feature compensation. Additionally, a dynamic joint tracking framework that monitors the clean speech signal and the noise simultaneously is introduced to obtain good noise statistics. In this approach, the information available from clean speech tracking can be used effectively for noise estimation, and the availability of dynamic noise information enhances the robustness of the algorithm when noise parameters fluctuate strongly within an utterance. Testing the proposed particle filter compensation (PFC) scheme on the Aurora 2 connected digit recognition task, we achieve an error reduction of 12.15% over the best multi-condition trained models, using the integrated PF-HMM framework to estimate the cluster-based HMM state sequence information. Finally, we extend the PFC framework to a large-vocabulary recognition task and show that it works well for large-vocabulary systems as well.
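As a rough illustration of the compensation scheme described above, the following Python sketch draws particles from pooled cluster-level Gaussian statistics, weights them under a log-spectral additive-noise observation model, and averages them into a clean-feature estimate. It is a minimal sketch under stated assumptions, not the thesis's implementation: the cluster is chosen at random here, whereas the thesis derives it from decoded HMM state-sequence information, and all names and parameter values are hypothetical.

```python
import numpy as np

def pf_compensate(noisy, cluster_means, cluster_vars, noise_mean,
                  n_particles=200, seed=0):
    """Sketch of particle-filter feature compensation (hypothetical API).

    noisy: (T, D) log-spectral features; cluster_means/cluster_vars:
    Gaussian statistics pooled from HMM state clusters (diagonal
    covariances assumed); noise_mean: (D,) estimated noise features.
    """
    rng = np.random.default_rng(seed)
    T, D = noisy.shape
    clean = np.empty((T, D))
    for t in range(T):
        # Placeholder proposal: sample particles from one cluster-level PDF.
        # (The thesis selects the cluster via decoded state information.)
        k = rng.integers(len(cluster_means))
        parts = rng.normal(cluster_means[k], np.sqrt(cluster_vars[k]),
                           size=(n_particles, D))
        # Additive-noise observation model: y ~ log(exp(x) + exp(n)).
        pred = np.logaddexp(parts, noise_mean)
        logw = -0.5 * ((noisy[t] - pred) ** 2).sum(axis=1)
        w = np.exp(logw - logw.max())
        w /= w.sum()
        clean[t] = w @ parts  # weighted particle mean = clean estimate
    return clean
```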
162

What is the Hidden Web? The emergence, characteristics, and social significance of anonymous communication in the hidden web

Papsdorf, Christian 27 April 2016 (has links) (PDF)
More than two and a half million people currently use the Tor network to communicate anonymously via the Internet and to gain access to online media that are not accessible using standard Internet technology. This sphere of communication can be described as the hidden web. In part because the phenomenon is so recent, the subject has scarcely been studied in the social sciences. The purpose of this paper is therefore to answer four fundamental questions: What is the hidden web? What characterises the communication sphere of the hidden web in contrast to the "normal" Internet? Which reasons can be identified to explain the development of the hidden web as a new communication sphere? And, finally, what is the social significance of the hidden web?
163

Biased recall as a potential obstacle to the achievement of synergy in decision-making groups

Giersiepen, Annika Nora 20 December 2016 (has links)
In hidden profile situations, groups often fail to fulfil their potential to make better decisions than any of their individual members. Several causes of this phenomenon have already been identified, most notably biases in the content of the group discussion and in how group members evaluate decision-relevant information. This thesis examines a further aspect of individual information processing whose bias could adversely affect the decision quality of discussion groups: individual recall of task-relevant information. Two biases are postulated: a recall advantage for information that supports a group member's initial preference, and a bias in favour of information that was already available before the discussion. Both biases are assumed to have a negative influence on the decision quality of the individual and hence of the whole group. These assumptions were examined in a series of four experiments and a reanalysis of two earlier studies. Overall, evidence was found for a recall advantage of one's own pre-discussion information over information newly learned during the discussion. Evidence for a recall advantage of preference-consistent information, by contrast, appeared only sporadically and was not significant in a meta-analytic summary. An experimental manipulation of the recall biases provided no indication of a relationship between these factors and decision quality in hidden profile situations. According to the results of this thesis, a bias in individual recall of decision-relevant information is therefore not a useful extension of existing explanations for why decision-making groups fail to realise synergy.
164

Exploring the Hidden Web

Papsdorf, Christian 14 June 2017 (has links) (PDF)
The research project "Exploring the Hidden Web. Use, features and specific character of anonymous communication on the Internet", part of the VolkswagenStiftung funding initiative "Off the beaten track", was based on four central research questions: (a) what the topics of communication on the hidden web are; (b) which media are used for this communication; (c) how, under conditions of anonymity, the trust necessary for any interaction is established; and (d), for each of these three aspects, which differences, commonalities and interfaces exist with freely accessible media, commonly referred to as the Internet ("Clearnet"). These questions were investigated using an explorative, qualitative approach.
165

Hidden Markov models and dynamic conditional correlation models: extensions and applications to stock market time series

Charlot, Philippe 25 November 2010 (has links)
The objective of this thesis is to study the modelling of regime changes in dynamic conditional correlation models, focusing in particular on the Markov-switching approach. Unlike the standard approach based on the basic hidden Markov model (HMM), we use extensions of the HMM drawn from the theory of probabilistic graphical models, a discipline that has proposed many derivations of the basic model for modelling complex structures. This thesis therefore sits at the interface of two disciplines: financial econometrics and probabilistic graphical models. The first essay presents a model built on a hierarchical hidden Markov structure that allows different levels of granularity to be defined for the regimes. It can be seen as a special case of the RSDC model (Regime Switching for Dynamic Correlations). Based on the hierarchical HMM, our model can capture nuances of regimes that are ignored by the classical Markov-switching approach. The second contribution proposes a Markov-switching version of the DCC model built from the factorial HMM. While the classical Markov-switching approach assumes that all elements of the correlation matrix follow the same switching dynamic, our model allows each element of the correlation matrix to have its own switching dynamic. In the final contribution, we propose a DCC model built from a decision tree, whose objective is to link the level of the individual volatilities to the level of the correlations. For this we use a hidden Markov decision tree, which is an extension of the HMM.
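To make the regime-switching idea concrete, here is a toy simulation, not drawn from the thesis, in which a two-state hidden Markov chain selects between a low- and a high-correlation regime for a bivariate return series; the Markov-switching DCC setting generalizes this by letting the chain (or, in the factorial variant, one chain per correlation element) drive the full correlation dynamics. All parameter values are illustrative.

```python
import numpy as np

def simulate_switching_correlation(T=500, rho=(0.2, 0.8), seed=0):
    """Toy two-regime correlation process (illustrative parameters)."""
    rng = np.random.default_rng(seed)
    P = np.array([[0.95, 0.05],    # persistent regime-transition matrix
                  [0.05, 0.95]])
    states = np.zeros(T, dtype=int)
    returns = np.zeros((T, 2))
    for t in range(T):
        if t > 0:
            states[t] = rng.choice(2, p=P[states[t - 1]])
        r = rho[states[t]]          # regime-dependent correlation
        cov = np.array([[1.0, r], [r, 1.0]])
        returns[t] = rng.multivariate_normal(np.zeros(2), cov)
    return states, returns
```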
166

Joint analysis of eye movements and EEGs using coupled hidden Markov models

Olivier, Brice 26 June 2019 (has links)
This PhD thesis jointly analyzes eye-tracking signals and multi-channel electroencephalograms (EEGs) acquired simultaneously from participants performing an information-collection reading task in which they must reach a binary decision: is the text related to a given topic or not? Textual information search is not a homogeneous process in time, neither from a cognitive point of view nor in terms of eye movements. On the contrary, it involves several steps or phases, such as normal reading, scanning and careful reading in oculometric terms, and the creation and rejection of hypotheses, confirmation and decision in cognitive terms. In a first contribution, we discuss an analysis method based on hidden semi-Markov chains over the eye-tracking signals that highlights four phases interpretable in terms of information-acquisition strategy: normal reading, fast reading, careful reading, and decision making. In a second contribution, we link these phases to characteristic changes in both the EEG signals and the textual information. Using a wavelet representation of the EEGs, this analysis reveals changes in the variance and in the correlation of the inter-channel coefficients, according to phase and bandwidth; and using word-embedding methods, we link the evolution of semantic similarity to the topic throughout the text with changes of strategy. In a third contribution, we present a new model in which the EEGs are directly integrated as output variables in order to reduce state uncertainty. This novel approach also takes into account the asynchronous and heterogeneous aspects of the data.
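The phase-segmentation idea rests on hidden semi-Markov chains, where each hidden state carries an explicit duration distribution rather than the geometric one implied by a plain HMM. The toy Python sketch below, with made-up Gaussian emissions and Poisson durations, illustrates the generative side of such a model; it is an illustration of the model class, not the thesis's estimation procedure.

```python
import numpy as np

def sample_hsmm(T, means, stds, dur_means, trans, seed=0):
    """Sample a hidden semi-Markov chain: each phase (e.g. normal reading,
    fast reading, careful reading, decision) persists for an explicit
    Poisson-distributed duration and emits Gaussian observations.
    All parameters are illustrative placeholders."""
    rng = np.random.default_rng(seed)
    K = len(means)
    states, obs = [], []
    s = rng.integers(K)
    while len(obs) < T:
        d = 1 + rng.poisson(dur_means[s])        # explicit phase duration
        for _ in range(min(d, T - len(obs))):
            states.append(s)
            obs.append(rng.normal(means[s], stds[s]))
        s = rng.choice(K, p=trans[s])            # switch to the next phase
    return np.array(states), np.array(obs)
```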
167

Discovery Of Application Workloads From Network File Traces

Yadwadkar, Neeraja 12 1900 (has links) (PDF)
An understanding of the Input/Output data access patterns of applications is useful in several situations. First, gaining an insight into what applications are doing with their data at a semantic level helps in designing efficient storage systems. Second, it helps to create benchmarks that closely mimic realistic application behavior. Third, it enables autonomic systems, as the information obtained can be used to adapt the system in a closed loop. All these use cases require the ability to extract the application-level semantics of I/O operations. Methods such as modifying application code to associate I/O operations with semantic tags are intrusive. It is well known that network file system traces are an important source of information that can be obtained non-intrusively and analyzed either online or offline. These traces are a sequence of primitive file system operations and their parameters. Simple counting, statistical analysis or deterministic search techniques are inadequate for discovering application-level semantics in the general case, because of the inherent variation and noise in realistic traces. In this paper, we describe a trace analysis methodology based on Profile Hidden Markov Models. We show that the methodology has powerful discriminatory capabilities that enable it to recognize applications based on the patterns in the traces, and to mark out regions in a long trace that encapsulate sets of primitive operations representing higher-level application actions. It is robust enough to work around discrepancies between training and target traces, such as differences in length and interleaving with other operations. We demonstrate the feasibility of recognizing patterns based on a small sampling of the trace, enabling faster trace analysis. Preliminary experiments show that the method is capable of learning accurate profile models on live traces in an online setting. We present a detailed evaluation of this methodology in a UNIX environment using NFS traces of selected commonly used applications such as compilations, as well as industrial-strength benchmarks such as TPC-C and Postmark, and discuss its capabilities and limitations in the context of the use cases mentioned above.
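The scoring primitive behind this kind of profile-HMM recognition is the forward algorithm: each candidate application has its own profile model, and a trace is attributed to the application whose model assigns it the highest likelihood. The sketch below shows that primitive for a plain discrete HMM in log space; a full profile HMM additionally has match, insert and delete states, and the parameters here are assumed, not taken from the thesis.

```python
import numpy as np
from scipy.special import logsumexp

def forward_log_likelihood(obs, log_pi, log_A, log_B):
    """Log-likelihood of a discrete observation sequence under an HMM.

    obs: sequence of symbol indices (e.g. encoded NFS operations);
    log_pi: (K,) log initial probabilities; log_A: (K, K) log transition
    matrix; log_B: (K, V) log emission matrix. Illustrative parameters.
    """
    alpha = log_pi + log_B[:, obs[0]]
    for o in obs[1:]:
        # Sum over previous states in log space, then emit symbol o.
        alpha = logsumexp(alpha[:, None] + log_A, axis=0) + log_B[:, o]
    return logsumexp(alpha)

# Recognition: pick the application whose profile scores the trace highest,
# e.g. argmax over forward_log_likelihood(trace, *models[app]).
```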
168

Essays in mathematical finance : modeling the futures price

Blix, Magnus January 2004 (has links)
This thesis consists of four papers dealing with the futures price process. In the first paper, we propose a two-factor futures volatility model designed for the US natural gas market, but applicable to any futures market where volatility decreases with maturity and varies with the seasons. A closed-form analytical expression for European call options is derived within the model and used to calibrate the model to implied market volatilities. The result is used to price swaptions and calendar spread options on the futures curve. In the second paper, a financial market is specified where the underlying asset is driven by a d-dimensional Wiener process and an M-dimensional Markov process. On this market, we provide necessary and, in the time-homogeneous case, sufficient conditions for the futures price to possess a semi-affine term structure. Next, the case when the Markov process is unobservable is considered. We show that the pricing problem in this setting can be viewed as a filtering problem, and we present explicit solutions for futures. Finally, we present explicit solutions for options on futures in both the observable and unobservable cases. The third paper is an empirical study of the SABR model, one of the latest contributions to the field of stochastic volatility models. Using Monte Carlo simulation, we test the accuracy of the approximation the model relies on, and we investigate the stability of the parameters involved. Further, the model is calibrated to market implied volatility, and its dynamic performance is tested. In the fourth paper, co-authored with Tomas Björk and Camilla Landén, we consider HJM-type models for the term structure of futures prices, where the volatility is allowed to be an arbitrary smooth functional of the present futures price curve. Using a Lie algebraic approach, we investigate when the infinite-dimensional futures price process can be realized by a finite-dimensional Markovian state space model, and we give general necessary and sufficient conditions, in terms of the volatility structure, for the existence of a finite-dimensional realization. We study a number of concrete applications, including the model developed in the first paper of this thesis. In particular, we provide necessary and sufficient conditions for when the induced spot price is a Markov process. We prove that the only HJM-type futures price models with spot-price-dependent volatility structures that generically possess a spot price realization are the affine ones. These models are thus the only generic spot price models from a futures price term structure point of view. / Diss. Stockholm : Handelshögskolan, 2004
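A minimal way to write down a volatility of the kind the first paper works with, decreasing with time to maturity (the Samuelson effect) and modulated by the season of the delivery month, is sketched below. The functional form and every parameter value are assumptions for illustration, not the thesis's calibrated model.

```python
import numpy as np

def futures_vol(t, T, sigma1=0.5, beta=1.4, sigma2=0.2, amp=0.3):
    """Illustrative two-factor instantaneous futures volatility.

    Factor 1 decays with time to maturity T - t (maturity effect);
    factor 2 oscillates with the season of the delivery date T
    (t, T in years, so the cosine has a one-year period).
    """
    maturity_factor = sigma1 * np.exp(-beta * (T - t))
    seasonal_factor = sigma2 * (1.0 + amp * np.cos(2.0 * np.pi * T))
    return np.sqrt(maturity_factor**2 + seasonal_factor**2)
```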
169

Probabilistic Methods for Computational Annotation of Genomic Sequences

Keller, Oliver 26 January 2011 (has links)
No description available.
170

Research on gesture recognition algorithms for the Kinect device

Sinkus, Skirmantas 06 August 2014 (has links)
The Microsoft Kinect device was released in 2010. It was designed for the Microsoft Xbox 360 video game console; in 2012, Kinect for Windows personal computers was introduced. The device is therefore comparatively new and topical. Most of what has been built for it are computer games, but the device can be used far beyond gaming; one such area is sport, specifically workouts that can be performed at home. Software, games and training programs already exist worldwide that let a user control the course of a workout by checking whether the person performs the prescribed movements correctly. Since no similar software is available in Lithuania, there is a need for software that would allow Lithuanian coaches to create workouts built around this device. The main goal of this work is to study gesture recognition algorithms for the Kinect device: how accurately they can recognize a gesture or gestures. The focus is on this question; criteria such as recognition time and implementation difficulty are raised but not investigated. The program developed in this work recognizes movements and gestures using the golden section search algorithm. The algorithm compares two models or templates; if no match is found, the first template is rotated slightly and the comparison is run again, and a tuning parameter controls the algorithm's accuracy. For comparison, a hidden Markov model algorithm can also be used ... [to full text]
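Golden section search itself is a standard one-dimensional method for minimizing a unimodal function without derivatives; in the setting above, the function could be the mismatch between a reference template and the recorded gesture rotated by a candidate angle. The sketch below is the generic algorithm; the gesture-distance objective is left as a user-supplied function and is not taken from the thesis.

```python
import numpy as np

def golden_section_search(f, a, b, tol=1e-3):
    """Minimize a unimodal function f on [a, b] by golden-section search,
    e.g. f(angle) = distance between a rotated gesture template and a
    reference template (objective supplied by the caller)."""
    invphi = (np.sqrt(5.0) - 1.0) / 2.0          # 1/phi ~ 0.618
    c = b - invphi * (b - a)
    d = a + invphi * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):                          # minimum lies in [a, d]
            b, d = d, c
            c = b - invphi * (b - a)
        else:                                    # minimum lies in [c, b]
            a, c = c, d
            d = a + invphi * (b - a)
    return 0.5 * (a + b)
```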
