  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
171

Développement d'un alphabet structural intégrant la flexibilité des structures protéiques / Development of a structural alphabet integrating the flexibility of protein structures

Sekhi, Ikram 29 January 2018 (has links)
The purpose of this PhD is to provide a Structural Alphabet (SA) for a more accurate characterization of protein three-dimensional (3D) structures, while integrating the growing number of protein 3D conformations now available in the Protein Data Bank (PDB). The SA also takes into account the logic behind the sequence of structural fragments by using hidden Markov models (HMM). In this PhD, we describe a new structural alphabet called SAFlex (Structural Alphabet Flexibility), improving the existing HMM-SA27 structural alphabet, in order to take into account data uncertainty (missing data in PDB files) and the redundancy of protein structures. The resulting SAFlex structural alphabet therefore offers a new, rigorous, and robust encoding model. This encoding handles data uncertainty by providing three encoding options: the maximum a posteriori (MAP), the marginal posterior distribution (POST), and the effective number of letters at each given position (NEFF). SAFlex also builds a consensus encoding from different replicates (multiple chains, monomers, and homomers) of a single protein, which allows the detection of structural variability between chains. The methodological advances and the SAFlex alphabet itself are the main contributions of this PhD.
We also present a new PDB parser (SAFlex-PDB) and demonstrate that it is of interest in both qualitative (detection of various errors) and quantitative (speed and parallelization) terms by comparing it with two well-known parsers in the field of bioinformatics (Biopython and BioJava). The SAFlex structural alphabet is made available to the scientific community through a website. The SAFlex web server is the concrete contribution of this PhD, while the SAFlex-PDB parser is an important contribution to its proper functioning. We describe the functions and interfaces of the SAFlex web server. Given a protein tertiary structure in PDB format, SAFlex can encode the 3D structure and identify and predict missing data; to date, it is the only alphabet able to encode and predict missing data in a 3D protein structure. Finally, these improvements are promising for exploring the increasing redundancy of protein data and obtaining useful quantifications of protein flexibility.
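The NEFF option mentioned above is only named in the abstract; a common definition of an "effective number" is the exponential of the Shannon entropy of the per-position posterior over letters. A minimal sketch under that assumption:

```python
import math

def neff(posterior):
    """Effective number of structural letters at one position.

    posterior: posterior probabilities over the alphabet's letters
    (non-negative, summing to 1). Defined here, by assumption, as
    exp(Shannon entropy): 1.0 for a certain encoding, up to the
    alphabet size for a maximally uncertain one.
    """
    entropy = -sum(p * math.log(p) for p in posterior if p > 0.0)
    return math.exp(entropy)

print(neff([1.0, 0.0, 0.0]))        # -> 1.0 (position encoded with certainty)
print(round(neff([1 / 27] * 27)))   # -> 27 (uniform over an HMM-SA27-sized alphabet)
```

Between these extremes, NEFF summarizes how many letters meaningfully compete at a position, which is one way to surface encoding uncertainty.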
172

The law of malpractice liability in clinical psychiatry : methodology, foundations and applications

Steyn, Carel Roché 11 1900 (has links)
As a point of departure in this inherently interdisciplinary endeavour, the concept "Holistic Multidisciplinary Management" ("HMM") is introduced as a macrocosmic adaptation of principles of project management. In line with HMM, a number of submissions regarding terminology and definitions in the interdisciplinary context of medicine (and particularly clinical psychiatry) and law are made, and the foundations of medical malpractice are examined. Building on the various foundations laid, specific types of conduct that can constitute clinical-psychiatric malpractice are addressed. A common theme that emerges in the various contexts covered is that the psychiatrist must negotiate various proverbial tightropes, involving inter alia tensions between restraint and freedom, excessive and insufficient medication, becoming too involved and not being involved enough with clients, as well as client confidentiality and the duty to warn third parties. It is concluded that law and medicine must work harmoniously together to establish the appropriate balance. This can be achieved only if mutual understanding and integrated functioning are promoted and translated into practice. / Law / LL.M.
173

MeLos: Analysis and Modelling of Speech Prosody and Speaking Style

Obin, Nicolas 23 June 2011 (has links) (PDF)
This thesis deals with the modelling of prosody for speech synthesis. We present MeLos, a complete system for the analysis and modelling of speech prosody, "the music of speech". The objective of this thesis is to model the strategy, the alternatives, and the speaking style of a speaker in order to enable natural, expressive, and varied speech synthesis. We present a unified system based on hidden Markov models (HMMs) with discrete/continuous observations to model the symbolic and acoustic characteristics of prosody: 1) A surface and deep linguistic processing chain is introduced to enrich the description of the characteristics of the text. 2) A segmental model combined with Dempster-Shafer fusion is used to combine linguistic and metrical constraints in the production of pauses. 3) A trajectory model based on the stylization of prosodic contours is presented to model the short-term and long-term variations of F0 simultaneously. The proposed system is used to model the strategies and style of a speaker, and is extended to the modelling of speaking style through shared-context modelling and speaker-normalization methods.
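The Dempster-Shafer fusion mentioned above combines mass functions from several evidence sources. A generic sketch of Dempster's rule of combination; the pause/no-pause framing and the mass values are illustrative assumptions, not values from the thesis:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions.

    m1, m2: dicts mapping frozenset hypotheses to masses summing to 1.
    Intersecting hypotheses pool their product masses; conflicting
    (disjoint) pairs are discarded and the rest is renormalized.
    """
    combined = {}
    conflict = 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y
    return {h: v / (1.0 - conflict) for h, v in combined.items()}

# Two hypothetical evidence sources about inserting a pause at a
# word boundary: a linguistic cue and a metrical (rhythm) cue.
PAUSE, NO = frozenset({"pause"}), frozenset({"no_pause"})
BOTH = PAUSE | NO                       # mass assigned to ignorance
ling = {PAUSE: 0.6, NO: 0.1, BOTH: 0.3}
metric = {PAUSE: 0.5, NO: 0.2, BOTH: 0.3}
fused = dempster_combine(ling, metric)
print(fused[PAUSE] > ling[PAUSE])  # -> True: agreeing cues reinforce each other
```

The appeal of this rule for pause production is that each cue can reserve mass for "don't know" (the BOTH set) instead of being forced into a premature decision.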
174

Novel fabrication and testing of light confinement devices

Ring, Josh January 2016 (has links)
The goal of this project is to study novel nanoscale excitation volumes, sensitive enough to study individual chromophores, and to go on to study new and exciting self-assembly approaches to this problem. Small excitation volumes may be engineered using light confinement inside apertures in metal films. These apertures enhance fluorescence emission rates and quantum yields, decrease fluorescence quenching, enable higher signal-to-noise ratios, and allow higher-concentration single-chromophore fluorescence to be studied by restricting this excitation volume. Excitation volumes are characterized using the chromophore's fluorescence by means of fluorescence correlation spectroscopy, which monitors fluctuations in fluorescence intensity. From the correlation in time, we can find the residence time, the number of chromophores, the volume in which they are diffusing, and therefore the fluorescence emission efficiency. Fluorescence properties are a probe of the local environment, a particularly powerful tool due to the high brightness (quantum yield) of fluorescent dyes and the sensitive photo-detection equipment, both of which are readily available (such as avalanche photodiodes and photomultiplier tubes). Novel materials combining the properties of conducting and non-conducting materials at scales much smaller than the incident wavelength are known as meta-materials. These allow combinations of properties not usually possible in natural materials at optical frequencies. The properties reported so far include negative refraction, negative phase velocity, fluorescence emission enhancement, and lensing; light confinement has therefore also been proposed to be possible. Instead of expensive and slow lithography methods, many of these materials may be fabricated with self-assembly techniques, which are truly nanoscopic and otherwise inaccessible with even the most sophisticated equipment.
It was found that nanoscaled volumes from ZMWs and HMMs based on NW arrays were all inefficient at enhancing fluorescence. The primary cause was the reduced fluorescence lifetime reducing the fluorescence efficiency, which runs contrary to some commentators in the literature. NW-based lensing was found to be possible in the blue region of the optical spectrum in an HMM, without the background fluorescence normally associated with a PAA template. This was achieved using a pseudo-ordered array of relatively large nanowires with a period just smaller than λ/2, which minimised losses. Nanowires in the traditional regime of λ/10 produced significant scattering and led to diffraction, such that they were wholly unsuitable for an optical lensing application.
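The fluorescence correlation spectroscopy analysis described above extracts the chromophore count and residence time from the intensity autocorrelation. A minimal sketch, assuming the standard 2-D diffusion model G(τ) = (1/N)·1/(1 + τ/τ_D); the trace and parameter values are illustrative:

```python
import numpy as np

def autocorrelation(intensity):
    """Normalized fluorescence autocorrelation G(tau) of an
    intensity trace: G(tau) = <dI(t) dI(t+tau)> / <I>^2, where
    dI is the fluctuation about the mean intensity."""
    i = np.asarray(intensity, dtype=float)
    d = i - i.mean()
    n = len(i)
    g = np.array([np.mean(d[:n - k] * d[k:]) for k in range(n // 2)])
    return g / i.mean() ** 2

# For the 2-D diffusion model G(tau) = (1/N) / (1 + tau/tau_D), the
# amplitude G(0) gives the mean number of chromophores in the
# excitation volume (N = 1/G(0)), and tau_D is the residence time.
tau = np.arange(100)
model = (1 / 5.0) / (1 + tau / 20.0)   # N = 5 molecules, tau_D = 20
print(int(round(1 / model[0])))        # -> 5
```

Fitting the measured autocorrelation to this model is what turns intensity fluctuations into the residence time and effective volume figures quoted in the abstract.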
175

Explicit Segmentation Of Speech For Indian Languages

Ranjani, H G 03 1900 (has links)
Speech segmentation is the process of identifying the boundaries between words, syllables, or phones in the recorded waveforms of spoken natural languages. The lowest level of speech segmentation is the breakup and classification of the sound signal into a string of phones. The difficulty of this problem is compounded by the phenomenon of co-articulation of speech sounds. The classical solution is to manually label and segment spectrograms. In the first step of this two-step process, a trained person listens to a speech signal, recognizes the word and phone sequence, and roughly determines the position of each phonetic boundary. The second step involves examining several features of the speech signal to place a boundary mark at the point where these features best satisfy a certain set of conditions specific to that kind of phonetic boundary. Manual segmentation of speech into phones is a highly time-consuming and painstaking process. Because segmentation is required for a variety of applications, such as acoustic analysis or building speech-synthesis databases for high-quality speech output systems, the time required to carry out this process for even relatively small speech databases can rapidly accumulate to prohibitive levels. This calls for automating the segmentation process. State-of-the-art segmentation techniques use Hidden Markov Models (HMM) for phone states. They give an average accuracy of over 95% within 20 ms of manually obtained boundaries. However, HMM-based methods require large training data for good performance. Another major disadvantage of such speech-recognition-based segmentation techniques is that they cannot handle very long utterances, which are necessary for prosody modeling in speech-synthesis applications. Development of Text-to-Speech (TTS) systems in Indian languages has been difficult to date owing to the non-availability of sizeable segmented speech databases of good quality.
Further, no prosody models exist for most of the Indian languages. Therefore, long utterances (at the paragraph level, and monologues) have been recorded as part of this work for creating the databases. This thesis aims at automating the segmentation of very long speech sentences recorded for corpus-based TTS synthesis in multiple Indian languages. In this explicit segmentation problem, we need to force-align boundaries in any utterance from its known phonetic transcription. The major disadvantage of forcing boundary alignments on the entire speech waveform of a long utterance is the accumulation of boundary errors. To overcome this, we force boundaries between two known phones (here, two successive stop consonants are chosen) at a time. The approach used is silence detection as a marker for stop consonants. This method gives around 89% accuracy (for the Hindi database) and is language-independent and training-free. These stop consonants act as anchor points for the next stage. Two methods for explicit segmentation have been proposed. Both rely on the accuracy of the above stop-consonant detection stage. Another common stage is the recently proposed implicit method, which uses a Bach-scale filter bank to obtain the feature vectors. The Euclidean Distance of the Mean of the Logarithm (EDML) of these feature vectors shows peaks at the points where the spectrum changes. The method performs with an accuracy of 87% within 20 ms of manually obtained boundaries and achieves deletion and insertion rates of 3.2% and 21.4%, respectively, for 100 sentences of the Hindi database. The first method is a three-stage approach. The first stage is stop-consonant detection, followed by a stage that uses Quatieri's sinusoidal model to classify sounds as voiced/unvoiced between two successive stop consonants. The final stage uses the EDML function of Bach-scale feature vectors to obtain further boundaries within the voiced and unvoiced regions.
It gives a Frame Error Rate (FER) of 26.1% for the Hindi database. The second proposed method uses duration statistics of the phones of the language. It again uses the EDML function of the Bach-scale filter bank to obtain peaks at the phone transitions, and uses the duration statistics to assign to each peak a probability of being a boundary. With this method, the FER improves to 22.8% for the Hindi database. Both methods are promising in that they give low frame error rates. Results show that the second method outperforms the first because it incorporates knowledge of durations. For the proposed approaches to be useful, manual intervention is required at the output of each stage. However, this intervention is less tedious than full manual segmentation and reduces the time taken to segment each sentence by around 60%. The approaches have been successfully tested on three different languages, 100 sentences each: Kannada, Tamil, and English (the TIMIT database was used for validating the algorithms). In conclusion, a practical solution to the segmentation problem is proposed. The algorithm being training-free, language-independent (the ES-SABSF method), and speaker-independent makes it useful in developing TTS systems for multiple languages while reducing the segmentation overhead. This method is currently being used in the lab for segmenting long Kannada utterances, spoken by reading a set of 1115 phonetically rich sentences.
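The EDML function described above can be sketched as follows; the window length, the two-band toy features, and the boundary position are illustrative assumptions, not values from the thesis:

```python
import numpy as np

def edml(features, w=10):
    """Euclidean Distance of the Mean of the Logarithm (EDML).

    features: (n_frames, n_bands) non-negative filter-bank energies.
    For each frame, compare the mean log-spectrum of the w frames
    before it with that of the w frames after it; the distance peaks
    where the spectrum changes, i.e. at candidate phone boundaries.
    """
    logf = np.log(features + 1e-10)   # guard against log(0)
    n = len(logf)
    d = np.zeros(n)
    for t in range(w, n - w):
        left = logf[t - w:t].mean(axis=0)
        right = logf[t:t + w].mean(axis=0)
        d[t] = np.linalg.norm(left - right)
    return d

# Toy signal: the band energies switch at frame 50, so the EDML
# curve peaks exactly at that frame.
feats = np.vstack([np.tile([1.0, 8.0], (50, 1)),
                   np.tile([8.0, 1.0], (50, 1))])
dist = edml(feats)
print(int(np.argmax(dist)))  # -> 50
```

Peak-picking on this curve, constrained to the region between two anchored stop consonants, is the boundary-hypothesis step that the duration statistics then re-weight.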
176

Analysis of genetic polymorphisms for statistical genomics: tools and applications

Morcillo Suárez, Carlos 19 December 2011 (has links)
New approaches are needed to manage and analyze the enormous quantity of biological data generated by modern technologies. Existing solutions are often fragmented and uncoordinated and thus require considerable bioinformatics skills from users. Three applications have been developed, illustrating different strategies to help users without extensive IT knowledge take maximum advantage of their data. SNPator is an easy-to-use suite that integrates all the usual tools for genetic association studies, from initial quality-control procedures to final statistical analysis. CHAVA is an interactive visual application for CNV calling from aCGH data. It presents data in a visual way that helps in assessing the quality of the calling and assists in the process of optimization. Haplotype Association Pattern Analysis visually presents data from exhaustive genomic haplotype associations, so that users can recognize patterns of possible associations that cannot be detected by single-SNP tests.
177

Sélection de paramètres acoustiques pertinents pour la reconnaissance de la parole / Relevant acoustic feature selection for speech recognition

Hacine-Gharbi, Abdenour 09 December 2012 (has links)
The objective of this thesis is to propose solutions and performance improvements for certain problems of relevant acoustic feature selection in the framework of speech recognition.
Our first contribution is a new relevant-feature selection method based on an exact expansion of the redundancy between a candidate feature and the features previously selected by a sequential forward search algorithm. The estimation problem for higher-order probability densities is solved by truncating the theoretical expansion of this redundancy at acceptable orders. Moreover, we propose a stopping criterion that fixes the number of selected features according to the mutual information approximated at iteration j of the search algorithm. However, mutual information is difficult to estimate, since its definition depends on the probability densities of the variables (features), whose distribution types are unknown and whose estimates are computed from a finite sample set. One approach to estimating these distributions is the histogram method, which requires a good choice of the number of bins (histogram cells). We therefore also propose a new formula for the number of bins that minimizes the bias of the entropy and mutual information estimators. This new estimator was validated on simulated data and on speech data. In particular, it was applied to select the static and dynamic MFCC parameters most relevant to a connected-word recognition task on the Aurora2 database.
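A histogram (plug-in) mutual information estimate of the kind discussed above can be sketched as follows; the fixed bin count is an illustrative placeholder for the bin-number formula the thesis derives:

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram (plug-in) estimate of I(X;Y) in nats.

    The joint density is approximated by a 2-D histogram; the bin
    count is a placeholder here, whereas the thesis proposes a
    formula for it that minimizes the estimator's bias.
    """
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()                      # joint probabilities
    px = pxy.sum(axis=1, keepdims=True)   # marginal of X
    py = pxy.sum(axis=0, keepdims=True)   # marginal of Y
    nz = pxy > 0                          # skip empty cells: avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
a = rng.normal(size=10_000)
b = rng.normal(size=10_000)
print(mutual_information(a, a) > mutual_information(a, b))  # -> True
# Identical variables share far more information than independent ones;
# note the small positive bias on the independent pair, which is what a
# careful choice of bin number aims to control.
```

The residual positive value on independent data illustrates the estimator bias that motivates optimizing the number of bins.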
179

GPS-Free UAV Geo-Localization Using a Reference 3D Database

Karlsson, Justus January 2022 (has links)
The goal of this thesis has been global geolocalization using only visual input and a 3D database for reference. In recent years, Convolutional Neural Networks (CNNs) have seen huge success in the task of classifying images. The flattened tensors at the final layers of a CNN can be viewed as vectors describing different input image features. Two networks were trained so that satellite and aerial images taken from different views of the same location had similar feature vectors, and so that images taken from different locations had different feature vectors. After training, the position of a given aerial image can then be estimated by finding the satellite image whose feature vector is most similar to that of the aerial image.  A previous method called Where-CNN was used as a baseline model. Batch-hard triplet loss, the Adam optimizer, and a different CNN backbone were tested as possible augmentations to this method. The models were trained on 2640 different locations in Linköping and Norrköping, and then tested on a sequence of 4411 query images along a path in Jönköping. The search region had 1449 different locations constituting a total area of 24 km².  In Top-1% accuracy, there was a significant improvement over the baseline, from 61.62% to 88.62%. The environment was modeled as a Hidden Markov Model to filter the sequence of guesses, and the Viterbi algorithm was then used to find the most probable path. This filtering procedure reduced the average error along the path from 2328.0 m to just 264.4 m for the best model; the baseline had an average error of 563.0 m after filtering.  A few different 3D methods were also tested. One drawback was that no pretrained weights existed for these models, as opposed to the 2D models, which were pretrained on the ImageNet dataset. The best 3D model achieved a Top-1% accuracy of 70.41%.
It should be noted that the best 2D model without any pretraining achieved a lower Top-1% accuracy of 49.38%. In addition, a 3D method for efficiently performing convolution on sparse 3D data was presented; compared to the straightforward method, it was almost 2.5 times faster while retaining comparable accuracy for individual query predictions.  While there was a significant improvement over the baseline, it was not enough to provide reliable and accurate localization for individual images. For global navigation, using the entire Earth as the search space, the information in a 2D image might not be enough to be uniquely identifiable. However, the 3D CNN techniques tested did not improve on the results of the pretrained 2D models. Using more data and experimenting with different 3D CNN architectures would be an exciting direction for further research.
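The HMM-plus-Viterbi filtering step described above can be sketched as follows; the toy transition model (neighboring map cells are likelier successors) and the emission scores are illustrative assumptions, not the thesis's actual parameters:

```python
import numpy as np

def viterbi(emission_logp, transition_logp):
    """Most probable sequence of map locations.

    emission_logp:   (T, N) log-likelihood of each of N locations for
                     each of T query images (e.g. from feature
                     similarity scores).
    transition_logp: (N, N) log-probability of moving from one
                     location to another between consecutive images.
    """
    T, N = emission_logp.shape
    score = emission_logp[0].copy()
    back = np.zeros((T, N), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + transition_logp   # (from, to)
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0) + emission_logp[t]
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# Three locations on a line; staying or moving to a neighbor is likely.
trans = np.log([[0.70, 0.25, 0.05],
                [0.25, 0.50, 0.25],
                [0.05, 0.25, 0.70]])
emis = np.log([[0.80, 0.15, 0.05],   # clearly location 0
               [0.30, 0.25, 0.45],   # noisy: raw argmax says 2
               [0.20, 0.70, 0.10]])  # clearly location 1
print(viterbi(emis, trans))  # -> [0, 0, 1]
```

Per-frame argmax would yield [0, 2, 1]; the smoothness prior encoded in the transition matrix removes the physically implausible jump, which is exactly how the filtering cut the average path error in the experiments above.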
180

PROGRAM ANOMALY DETECTION FOR INTERNET OF THINGS

Akash Agarwal (13114362) 01 September 2022 (has links)
Program anomaly detection, the modeling of normal program executions to detect deviations at runtime as cues for possible exploits, has become a popular approach to software security. To leverage high-performance modeling and complete tracing, existing techniques however focus on subsets of applications, e.g., on system calls or calls to predefined libraries. Due to their limited scope, they are insufficient for detecting subtle control-oriented and data-oriented attacks that introduce new illegal call relationships at the application level. Such techniques are also hard to apply on devices that lack a clear separation between the OS and the application layer. This dissertation advances the design and implementation of program anomaly detection techniques by providing application context for library and system calls, making them powerful for detecting advanced attacks that manipulate intra- and inter-procedural control flow and decision variables.

This dissertation has two main parts. The first part describes LANCET, a statically initialized, generic calling-context program anomaly detection technique based on hidden Markov modeling, which provides security against control-oriented attacks at program runtime. It also establishes an efficient execution-tracing mechanism facilitated through source-code instrumentation of applications. The second part describes EDISON, a program anomaly detection framework that provides security against data-oriented attacks, using graph representation learning for intra-procedural and language models for inter-procedural behavioral modeling.

This dissertation makes three high-level contributions. First, it demonstrates the design, implementation, and extensive evaluation of an aggregation-based anomaly detection technique using fine-grained, generic calling-context-sensitive modeling that allows the detection to scale over entire applications. Second, it presents the design, implementation, and extensive evaluation of a detection technique that maps runtime traces to the program's control-flow graph and leverages graphical feature representations to learn dynamic program behavior. Finally, this dissertation provides details of and experience with designing program anomaly detection frameworks, from high-level concepts and design down to low-level implementation techniques.
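The call-modeling idea behind the first part can be illustrated with a deliberately simplified first-order Markov model over call events (LANCET's hidden-Markov, calling-context-sensitive modeling is more sophisticated): learn transition statistics from normal traces, then flag traces whose per-transition log-likelihood is anomalously low. The call names and thresholding are hypothetical:

```python
from collections import defaultdict
import math

class CallSequenceModel:
    """First-order Markov model over call events; a toy stand-in for
    the HMM-based modeling described above."""

    def __init__(self, smoothing=1e-6):
        self.counts = defaultdict(lambda: defaultdict(int))
        self.smoothing = smoothing

    def fit(self, traces):
        # Count observed call-to-call transitions in normal executions.
        for trace in traces:
            for a, b in zip(trace, trace[1:]):
                self.counts[a][b] += 1
        return self

    def log_likelihood(self, trace):
        # Average per-transition log-probability; unseen transitions
        # receive only the tiny smoothing mass and drag the score down.
        ll = 0.0
        for a, b in zip(trace, trace[1:]):
            total = sum(self.counts[a].values())
            p = (self.counts[a][b] + self.smoothing) / (total + 1.0)
            ll += math.log(p)
        return ll / max(len(trace) - 1, 1)

model = CallSequenceModel().fit([["open", "read", "close"]] * 50)
normal = model.log_likelihood(["open", "read", "close"])
attack = model.log_likelihood(["open", "exec", "close"])
print(normal > attack)  # -> True: the injected call scores far lower
```

A runtime detector would compare such scores against a threshold calibrated on held-out normal traces; the dissertation's application-level context is what lets this kind of model see illegal call relationships that pure system-call monitoring misses.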
