501

Modellierung PBPK-relevanter Verteilungskoeffizienten organischer Stoffe [Modelling of PBPK-relevant partition coefficients of organic substances]

Stöckl, Stefanie 06 February 2014
Three partition coefficients relevant to physiologically based pharmacokinetic (PBPK) models were modelled with several approaches. For the blood/air partition coefficient, a literature model based on linear solvation energy relationships (LSER) was applied and discussed. Using a schematic division of the blood compartment into water and an organic part, the blood/air partition coefficient was predicted by linear regression from other partition coefficients. In addition, a fragment model was developed. The fat/air partition coefficient was modelled with the LSER approach and with other partition coefficients. The fat/blood coefficient was calculated from the first two. Since the water/air partition coefficient, i.e. the inverse dimensionless Henry coefficient, enters the blood/air modelling and can be obtained from vapour pressure and water solubility, the vapour pressure was modelled as well.
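Two relationships mentioned above are compact enough to illustrate directly: the fat/blood coefficient computed from the fat/air and blood/air coefficients, and the water/air coefficient (inverse dimensionless Henry coefficient) obtained from vapour pressure and water solubility. A minimal sketch under an ideal-gas assumption; the function names and input values (rough literature-style numbers for toluene) are illustrative, not taken from the thesis:

```python
R = 0.082057  # ideal gas constant, L·atm/(mol·K)

def water_air_partition(solubility_g_per_L, molar_mass_g_per_mol,
                        vapour_pressure_atm, temperature_K=298.15):
    """Inverse dimensionless Henry coefficient K_water/air = C_water / C_air,
    estimated from water solubility and vapour pressure (ideal-gas assumption)."""
    c_water = solubility_g_per_L / molar_mass_g_per_mol   # mol/L in saturated water
    c_air = vapour_pressure_atm / (R * temperature_K)     # mol/L in saturated vapour
    return c_water / c_air

def fat_blood_partition(k_fat_air, k_blood_air):
    """Fat/blood coefficient calculated from the first two, as in the abstract."""
    return k_fat_air / k_blood_air

# Illustrative values only (approximate data for toluene):
k_wa = water_air_partition(0.52, 92.14, 0.037)
print(f"K_water/air ≈ {k_wa:.1f}")                        # about 3.7
print(f"K_fat/blood ≈ {fat_blood_partition(1000.0, 15.0):.0f}")
```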
502

Key Data for the Reference and Relative Dosimetry of Radiotherapy and Diagnostic and Interventional Radiology Beams

Benmakhlouf, Hamza January 2015
Accurate dosimetry is a fundamental requirement for the safe and efficient use of radiation in medical applications. International Codes of Practice, such as IAEA TRS-398 (2000) for radiotherapy beams and IAEA TRS-457 (2007) for diagnostic radiology beams, provide the necessary formulation for reference and relative dosimetry and the data required for their implementation. Research in recent years has highlighted the shortage of such data for small radiotherapy photon beams and for surface dose estimations in diagnostic and interventional radiology, leading to significant dosimetric errors that in some instances have jeopardized patient safety and treatment efficiency. The aim of this thesis is to investigate and determine key data for the reference and relative dosimetry of radiotherapy and radiodiagnostic beams. For that purpose the Monte Carlo system PENELOPE has been used to simulate the transport of radiation in different media, and a number of experimental determinations have also been made. A review of the key data for radiotherapy beams published after the release of IAEA TRS-398 was conducted, and in some cases the considerable differences found were questioned under the criterion of data consistency throughout the dosimetry chain (from standards laboratories to the user). A modified concept of output factor, defined in a new international formalism for the dosimetry of small photon beams, requires corrections to dosimeter readings for the dose determination in small beams used clinically. In this work, output correction factors were determined for Varian Clinac 6 MV photon beams and Leksell Gamma Knife Perfexion ⁶⁰Co gamma-ray beams, for a large number of small-field detectors, including air and liquid ionization chambers, shielded and unshielded silicon diodes, and diamond detectors, all of which were simulated by Monte Carlo in great detail. Backscatter factors and ratios of mass energy-absorption coefficients required for surface (skin) dose determinations in diagnostic and interventional radiology applications were also determined, as well as their extension to account for non-standard phantom thicknesses and materials. A database of these quantities was created for a broad range of monoenergetic photon beams, and computer codes were developed to convolve the data with clinical spectra, thus enabling the determination of key data for arbitrary beam qualities. Data presented in this thesis have been contributed to the IAEA international dosimetry recommendations for small radiotherapy beams and for diagnostic radiology in paediatric patients. / At the time of the doctoral defense, the following paper was unpublished and had a status as follows: Paper 6: Manuscript.
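The final step described above, folding monoenergetic data with clinical spectra, amounts to a spectrum-weighted average over energy. A minimal sketch of that convolution; the weighting choice and both input curves are placeholders for illustration, not data or code from the thesis:

```python
import numpy as np

def spectrum_weighted_value(energies_keV, mono_values, spectrum):
    """Fold a monoenergetic quantity (e.g. a backscatter factor curve) with a
    clinical spectrum to obtain its value for that beam quality.

    Weighting here is by photon fluence; published data may instead weight by
    air kerma, so this is an illustrative choice."""
    w = np.asarray(spectrum, dtype=float)
    b = np.asarray(mono_values, dtype=float)
    return np.trapz(b * w, energies_keV) / np.trapz(w, energies_keV)

# Placeholder curves on a 20-150 keV grid (made-up shapes for illustration):
E = np.linspace(20.0, 150.0, 131)
mono_b = 1.0 + 0.4 * np.exp(-((E - 60.0) / 40.0) ** 2)   # fake backscatter factors
spec = np.exp(-((E - 70.0) / 25.0) ** 2)                  # fake x-ray spectrum
print(f"beam-quality value: {spectrum_weighted_value(E, mono_b, spec):.4f}")
```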
503

Multi-Unit Longitudinal Models with Random Coefficients and Patterned Correlation Structure: Modelling Issues

Ledolter, Johannes January 1999
The class of models studied in this paper, multi-unit longitudinal models, combines both the cross-sectional and the longitudinal aspects of observations. Many empirical investigations involve the analysis of data structures that are both cross-sectional (observations are taken on several units at a specific time period or at a specific location) and longitudinal (observations on the same unit are taken over time or space). Multi-unit longitudinal data structures arise in economics and business, where panels of subjects are studied over time; in biostatistics, where groups of patients on different treatments are observed over time; and in situations where data are taken over time and space. Modelling issues in multi-unit longitudinal models with random coefficients and patterned correlation structure are illustrated in the context of two data sets. The first data set consists of short time series of annual death rates and alcohol consumption for twenty-five European countries. The second data set consists of glaciologic time series of snow temperature at 14 different locations within a small glacier in the Austrian Alps. A practical model-building approach, consisting of model specification, estimation, and diagnostic checking, is outlined. (author's abstract) / Series: Forschungsberichte / Institut für Statistik
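A random-coefficients model of the kind described, with unit-specific intercepts and slopes scattered around population means, can be sketched with statsmodels. The simulated data below are a hypothetical stand-in for, say, the country-level death-rate series; note that the paper's patterned within-unit correlation structure is not modelled in this sketch:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate a small multi-unit longitudinal data set: 25 units, 10 time points,
# each unit with its own random intercept and slope (hypothetical stand-in).
rng = np.random.default_rng(0)
rows = []
for unit in range(25):
    a = 2.0 + rng.normal(0, 0.5)           # unit-specific intercept
    b = 0.3 + rng.normal(0, 0.1)           # unit-specific slope
    for t in range(10):
        rows.append({"unit": unit, "t": t,
                     "y": a + b * t + rng.normal(0, 0.2)})
df = pd.DataFrame(rows)

# Random intercept and slope per unit; fixed effects give the population means.
model = smf.mixedlm("y ~ t", df, groups=df["unit"], re_formula="~t")
fit = model.fit()
print(fit.summary())
```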
504

Analyse théorique et numérique des équations de la magnétohydrodynamique : application à l'effet dynamo [Theoretical and numerical analysis of the magnetohydrodynamics equations: application to the dynamo effect]

Luddens, Francky 06 December 2012
This thesis is concerned with the magnetohydrodynamics (MHD) equations in heterogeneous media, i.e. in media whose physical properties may vary, possibly abruptly. In particular, the emphasis is on solving the Maxwell equations in media with inhomogeneous magnetic properties. We present a non-standard method for solving this problem with Lagrange finite elements. We then discuss its implementation in the SFEMaNS code, developed since 2002 by J.-L. Guermond, C. Nore, J. Léorat, R. Laguerre and A. Ribeiro, and the first results obtained in dynamo simulations. For instance, we consider the so-called Von Kármán dynamo in order to understand the VKS2 experiment. We also address cases of precession-driven dynamos, as well as the problem of dynamo action in a Taylor-Couette flow.
505

Fluorine and chlorine fractionation in the sub-arc mantle: an experimental investigation

Dalou, Célia 21 January 2011
Volatile elements released from the subducting slab play a fundamental role in the formation of arc magmas in the mantle wedge. Advances in melt inclusion studies have enlarged the data set on volatile abundances in arc magmas, and it is now possible to characterize some volatile contents of arc primary magmas, in particular F and Cl. A recent study of Mt Shasta melt inclusions (LeVoyer et al., 2010) shows that the fractionation of F and Cl potentially carries information about arc magma genesis. In order to trace the source of arc magmas, fluorine and chlorine partitioning was investigated. Here, I present new experimental determinations of Cl and F partition coefficients between dry and hydrous silicate melts and mantle minerals: olivine, orthopyroxene, clinopyroxene, plagioclase, garnet, as well as pargasite and phlogopite. The values were compiled from more than 300 measurements in 24 melting experiments, conducted between 8 and 25 kbar and between 1180 and 1430 °C. The low-abundance F and Cl contents of the minerals were measured with a Cameca IMS 1280 at WHOI using the negative secondary ion mode. The results show that D_F(Opx/melt) ranges from 0.123 to 0.021 and D_F(Cpx/melt) from 0.153 to 0.083, while the Cl partition coefficients vary from 0.002 to 0.069 for D_Cl(Opx/melt) and from 0.008 to 0.015 for D_Cl(Cpx/melt). Furthermore, D_F(Ol/melt) ranges from 0.116 to 0.005 and D_Cl(Ol/melt) from 0.001 to 0.004, while D_F(Grt/melt) ranges from 0.012 to 0.166 and D_Cl(Grt/melt) from 0.003 to 0.087 with increasing water content and decreasing temperature. I also show that F is compatible in phlogopite (D_F(Phl/melt) > 1.2) but incompatible in pargasite (D_F(Amp/melt) from 0.36 to 0.63). By contrast, Cl is more incompatible in phlogopite (D_Cl(Phl/melt) on average 0.09 ± 0.02) than in pargasite (D_Cl(Amp/melt) from 0.12 to 0.38). This study demonstrates that F and Cl substitute into specific oxygen sites in minerals, which makes them more sensitive than trace elements to variations in crystal chemistry and water content, and hence to melting conditions. Using these new partition coefficients, I modelled the melting of potential sub-arc lithologies with variable amounts of aqueous fluid. This model makes it possible to decipher 1) the amount of aqueous fluid involved in melting, 2) whether melting is fluid-induced or involves a hydrous-mineral-bearing source, and 3) whether the melting lithology bears pargasite or phlogopite. It shows that the sources of some primitive melts, for instance from Italy, bear pargasite and phlogopite, while other primitive melts appear to result from fluid-induced melting.
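The abstract does not spell out the melting model's equations; a common starting point for this kind of source modelling is batch melting with a bulk partition coefficient assembled from the mineral modes. A minimal sketch in that spirit; the modes and the D values (picked from within the ranges quoted above) are illustrative assumptions, not the thesis's calibrated model:

```python
def bulk_D(modes, mineral_D):
    """Bulk partition coefficient from mineral modes (weight fractions summing
    to 1) and mineral/melt partition coefficients."""
    return sum(modes[m] * mineral_D[m] for m in modes)

def batch_melt_enrichment(D, F_melt):
    """Batch melting: C_liquid / C_source for bulk partition coefficient D
    and melt fraction F_melt."""
    return 1.0 / (D + F_melt * (1.0 - D))

# Illustrative pargasite-bearing source; modes and D values are assumptions
# chosen inside the ranges quoted in the abstract.
modes = {"ol": 0.55, "opx": 0.25, "cpx": 0.12, "amp": 0.08}
D_F   = {"ol": 0.06, "opx": 0.07, "cpx": 0.12, "amp": 0.50}
D_Cl  = {"ol": 0.002, "opx": 0.03, "cpx": 0.01, "amp": 0.25}

for F_melt in (0.01, 0.05, 0.10):
    eF = batch_melt_enrichment(bulk_D(modes, D_F), F_melt)
    eCl = batch_melt_enrichment(bulk_D(modes, D_Cl), F_melt)
    print(f"F_melt = {F_melt:.2f}: F enrichment = {eF:.1f}, "
          f"Cl enrichment = {eCl:.1f}, F/Cl ratio = {eF/eCl:.2f}")
```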
506

Optimizing text-independent speaker recognition using an LSTM neural network

Larsson, Joel January 2014
In this paper a novel speaker recognition system is introduced. With the advances in computer science, automated speaker recognition has become increasingly popular as an aid in crime investigations and authorization processes. Here, a recurrent neural network approach is used to learn to identify ten speakers within a set of 21 audio books. Audio signals are processed via spectral analysis into Mel Frequency Cepstral Coefficients, which serve as speaker-specific features input to the neural network. The Long Short-Term Memory algorithm is examined for the first time within this area, with interesting results. Experiments are made to find the optimal network model for the problem. These show that the network learns to identify the speakers well, text-independently, when the recording situation is the same. However, the system has difficulty recognizing speakers across different recordings, probably due to the noise sensitivity of the speech processing algorithm in use.
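A minimal sketch of the pipeline described above, MFCC features fed to an LSTM classifier, using librosa and Keras; the layer sizes, frame counts, and training call are illustrative assumptions rather than the configuration tuned in the thesis:

```python
import numpy as np
import librosa
import tensorflow as tf

def mfcc_sequence(path, n_mfcc=13, max_frames=200, sr=16000):
    """Load one recording and return its MFCC sequence (frames x coefficients)."""
    y, _ = librosa.load(path, sr=sr)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T[:max_frames]

# LSTM classifier over MFCC sequences; sizes are illustrative guesses.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(None, 13)),            # variable-length input
    tf.keras.layers.LSTM(64),                           # Long Short-Term Memory
    tf.keras.layers.Dense(10, activation="softmax"),    # ten speakers
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Smoke test on random stand-in data (real MFCC sequences extracted from the
# audio books via mfcc_sequence would be used in practice):
X = np.random.randn(32, 200, 13).astype("float32")
y = np.random.randint(0, 10, size=32)
model.fit(X, y, epochs=1, verbose=0)
print(model.predict(X[:1]).shape)   # (1, 10) class probabilities
```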
507

Sur l'algèbre et la combinatoire des sous-graphes d'un graphe [On the algebra and combinatorics of the subgraphs of a graph]

Buchwalder, Xavier 30 November 2009
We introduce a new algebraic structure that provides a natural framework for reconstruction problems, together with a conjecture that would make it possible to handle symmetries directly. The framework developed in this study also generates relations that hold between the numbers of substructures, and in a certain sense the conjecture asserts that all such relations are obtained this way. Moreover, generalizing earlier reconstruction results allows their limits to be probed by searching for cases where these relations are optimal. We thus show that the theorems of V. Müller and L. Lovász are best possible by exhibiting extremal cases. This generalization to algebras of invariants, already carried out by P. J. Cameron and V. B. Mnukhin, pins reconstruction problems between, on the one hand, the (supplied) relations one wishes to exploit and, on the other, examples establishing the optimality of the result. Thus, with no information on the group, Lovász's result is best possible, and when the order of the group is taken into account, Müller's result is best possible.
508

Analyse mathématique et numérique de problèmes d'ondes apparaissant dans les plasmas magnétiques [Mathematical and numerical analysis of wave problems arising in magnetic plasmas]

Imbert-Gérard, Lise-Marie 09 September 2013
This thesis studies mathematical and numerical aspects of wave phenomena in magnetic plasmas. Reflectometry, a probing technique for fusion plasmas, is modelled by the Maxwell equations. In this model, the permittivity tensor has vanishing eigenvalues as well as vanishing diagonal entries. The dispersion relation brings out two crucial phenomena, cutoffs and resonances, which occur when the wave number goes to zero or to infinity. Part I gathers the numerical results. The main novelty lies in the definition of a resonant solution. Indeed, because the coefficients vanish continuously while changing sign, the solution can be singular, i.e. have a non-integrable component. However, thanks to the limiting absorption principle, a resonant solution is explicitly defined as the limit of integrable solutions of the regularized problem. The theoretical expression of the singularity is validated by numerical tests of the passage to the limit. Part II concerns numerical approximation. It includes the development of a new numerical method suited to smooth coefficients, based on the Ultra Weak variational formulation but requiring specific basis functions constructed as local approximations of the adjoint problem. The convergence analysis is carried out in dimension one; in dimension two, the construction of the basis functions and their interpolation property are detailed. The resulting high-order method makes it possible to simulate the cutoff phenomenon, while simulating the resonance phenomenon in dimension two remains a challenge.
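The limiting absorption principle invoked above can be illustrated on a toy problem: a coefficient vanishing linearly at x = 0 yields a non-integrable 1/x singularity, yet the integrals of the regularized solutions u_ν(x) = 1/(x + iν) converge as ν tends to 0 from above. A minimal numerical sketch of this toy model (an illustration of the principle only, not of the thesis's Maxwell system):

```python
import numpy as np

def integral_of_regularized_solution(nu, n=200001):
    """Integrate u_nu(x) = 1/(x + i*nu) over [-1, 1]. The nu = 0 'solution'
    1/x is not integrable, but the integrals of u_nu converge as nu -> 0+,
    which is how the limiting absorption principle defines the resonant limit."""
    x = np.linspace(-1.0, 1.0, n)
    return np.trapz(1.0 / (x + 1j * nu), x)

for nu in (1e-1, 1e-2, 1e-3, 1e-4):
    print(f"nu = {nu:g}: integral = {integral_of_regularized_solution(nu):.6f}")
# The values approach -i*pi: in the limit, the singular part contributes a
# principal value plus a delta-type term, the structure of a resonant solution.
print("expected limit:", -1j * np.pi)
```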
509

Accent Classification from Speech Samples by Use of Machine Learning

Carol Pedersen Unknown Date
“Accent” is the pattern of speech pronunciation by which one can identify a person’s linguistic, social or cultural background. It is an important source of inter-speaker variability and a particular problem for automated speech recognition. The aim of the study was to investigate a new computational approach to accent classification which did not require phonemic segmentation or the identification of phonemes as input, and which could therefore be used as a simple, effective accent classifier. Through a series of structured experiments this study investigated the effectiveness of Support Vector Machines (SVMs) for speech accent classification using time-based units rather than linguistically informed ones, and compared it to the accuracy of other machine learning methods, as well as the ability of humans to classify speech according to accent. A corpus of read speech was collected in two accents of English (Arabic and “Indian”) and used as the main data source for the experiments. Mel-frequency cepstral coefficients were extracted from the speech samples and combined into larger units of 10 to 150 ms duration, which then formed the input data for the various machine learning systems. Support Vector Machines were found to classify the samples with up to 97.5% accuracy, with very high precision and recall, using samples of between 1 and 4 seconds of speech. This compared favourably with a human listener study, in which subjects were able to distinguish between the two accent groups with an average of 92.5% accuracy in approximately 8 seconds. Repeating the SVM experiments on a different corpus resulted in a best classification accuracy of 84.6%. Experiments using a decision tree learner and a rule-based classifier on the original corpus gave a best accuracy of 95%, but results over the range of conditions were much more variable than those using the SVM. Rule extraction was performed in order to help explain the results and better inform the design of the system. The new approach was therefore shown to be effective for accent classification, and a plan for its role within various other larger speech-related contexts was developed.
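A minimal sketch of this kind of classifier, MFCCs pooled over fixed time windows and fed to an SVM, using librosa and scikit-learn; the window length, kernel settings, and synthetic stand-in audio are illustrative assumptions, not the study's setup:

```python
import numpy as np
import librosa
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

SR = 16000

def window_features(y, sr=SR, n_mfcc=13, win_ms=100):
    """MFCCs averaged over fixed-duration windows: one feature vector per
    window, i.e. time-based units with no phonemic segmentation."""
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)    # (n_mfcc, frames)
    frames_per_win = max(1, int((win_ms / 1000) * sr / 512))  # default hop = 512
    n_win = mfcc.shape[1] // frames_per_win
    return np.array([mfcc[:, i*frames_per_win:(i+1)*frames_per_win].mean(axis=1)
                     for i in range(n_win)])

# Synthetic stand-ins for the two accent groups (real read-speech recordings
# would be used in practice): two noise sources with different spectra.
rng = np.random.default_rng(1)
X, y = [], []
for label in (0, 1):
    for _ in range(5):                       # five "recordings" per group
        noise = rng.normal(size=SR * 2)      # 2 s of noise
        k = 3 + 5 * label                    # different smoothing per group
        sig = np.convolve(noise, np.ones(k) / k, "same")
        feats = window_features(sig.astype(np.float32))
        X.extend(feats)
        y.extend([label] * len(feats))

clf = SVC(kernel="rbf", C=1.0)               # illustrative kernel choice
print("cv accuracy:", cross_val_score(clf, np.array(X), np.array(y), cv=5).mean())
```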
