71

Index theory and groupoids for filtered manifolds

Ewert, Eske Ellen 26 October 2020 (has links)
No description available.
72

The Three-Dimensional Digital Imaging Methods for X-ray Computed Tomography and Digital Holographic Microscopy

Kvasnica, Lukáš January 2015 (has links)
This dissertation deals with methods for processing image data in X-ray microtomography and digital holographic microscopy. The work aims at a significant acceleration of the algorithms for tomographic reconstruction and for image reconstruction in holographic microscopy, by means of optimization and the use of massively parallel GPUs. In the field of microtomography, new GPU-accelerated (graphics processing unit) implementations of filtered back projection and of back projection filtration of derived data are presented, together with a technique for orientation normalization and evaluation of 3D tomographic data. In the part devoted to holographic microscopy, the individual steps of the complete image-processing procedure are described. This part introduces a new, original technique for phase unwrapping and for correcting image phase damaged by optical vortices in the wrapped phase. Implementations of methods for compensating phase deformation and for cell tracking are then described. Finally, the Q-PHASE software is briefly introduced: a complete bundle of all the algorithms needed to control the holographic microscope and process holographic images.
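The filtered back projection mentioned in this abstract follows the classic two-step scheme: ramp-filter each projection, then back-project. Below is a minimal CPU sketch of generic textbook FBP, not the thesis's GPU implementation; the phantom, grid size and angle count are illustrative.

```python
import numpy as np

def ramp_filter(sinogram):
    """Ramp (Ram-Lak) filter applied to each projection along the detector axis."""
    n = sinogram.shape[1]
    ramp = np.abs(np.fft.fftfreq(n))
    return np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))

def back_project(sinogram, angles):
    """Smear each filtered projection back across the image plane and sum."""
    n = sinogram.shape[1]
    c = n // 2
    ys, xs = np.mgrid[0:n, 0:n] - c
    recon = np.zeros((n, n))
    for proj, theta in zip(sinogram, angles):
        # Detector coordinate of every pixel for this view (nearest neighbour).
        s = np.round(xs * np.cos(theta) + ys * np.sin(theta)).astype(int) + c
        recon += proj[np.clip(s, 0, n - 1)]
    return recon * np.pi / len(angles)

# Phantom whose projections are known analytically: a centered disk of
# radius r has the same projection 2*sqrt(r^2 - s^2) at every angle.
n, r = 65, 20
s = np.arange(n) - n // 2
projection = 2 * np.sqrt(np.clip(r ** 2 - s ** 2, 0, None))
angles = np.linspace(0, np.pi, 90, endpoint=False)
sinogram = np.tile(projection, (len(angles), 1))
image = back_project(ramp_filter(sinogram), angles)
```

The reconstruction recovers a value near 1 inside the disk and near 0 outside; a GPU version parallelizes the per-pixel lookup and accumulation.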
73

Active Control of the Human Voice from a Sphere

Anderson, Monty J 01 May 2015 (has links) (PDF)
This work investigates the application of active noise control (ANC) to speech. ANC has been successful at reducing tonal noise; here, that success is extended to noise that is not purely tonal but has tonal elements, such as speech. Limitations on the active control of human speech, such as causality, were established. An optimal configuration of control actuators on a sphere was developed using a genetic algorithm, and the optimal error-sensor location was found by exploring the nulls in the magnitude of the radiated pressure relative to the primary pressure field. Numerically predicted and experimentally validated results for the attenuation of single-frequency tones are shown, and the differences between numerical predictions with a sphere present in the pressure field and with monopoles in the free field are discussed. The attenuation from ANC of both monotone and natural speech is shown, along with a discussion of the effect of causality on the results. The sentence "Joe took father's shoe bench out" was used for both monotone and natural speech. Over the entire monotone sentence, the average attenuation was 8.6 dB, with a peak attenuation of 10.6 dB for the syllable "Joe". Natural-speech attenuation was 1.1 dB on average over the sentence, with a peak attenuation of 2.4 dB on the syllable "bench". In addition to the lower attenuation values for natural speech, the pressure level for the word "took" increased by 2.3 dB, while the harmonic at 420 Hz in the word "father's" of monotone speech was reduced globally by up to 20 dB. Based on these results, it was concluded that a reasonable amount of attenuation could be achieved on natural speech if its correlation could approach that of monotone speech.
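The tonal ANC this abstract starts from is often illustrated with an adaptive LMS canceller: two weights on in-phase and quadrature references converge until a single tone is cancelled at the error sensor. A minimal sketch of the generic textbook algorithm, not the control system of this thesis; the 200 Hz tone and step size are illustrative, and secondary-path dynamics are ignored:

```python
import numpy as np

# Adaptive cancellation of a single 200 Hz tone: two LMS weights on
# in-phase/quadrature references drive the residual at the error
# microphone toward zero.
fs, f0, n = 8000, 200, 4000
t = np.arange(n) / fs
primary = np.sin(2 * np.pi * f0 * t)            # tonal disturbance
ref = np.stack([np.sin(2 * np.pi * f0 * t),
                np.cos(2 * np.pi * f0 * t)])    # reference signals
w = np.zeros(2)
mu = 0.01                                       # LMS step size
err = np.empty(n)
for k in range(n):
    y = w @ ref[:, k]                  # anti-noise output
    err[k] = primary[k] - y            # residual at the error sensor
    w += 2 * mu * err[k] * ref[:, k]   # LMS weight update

mse_start = np.mean(err[:400] ** 2)
mse_end = np.mean(err[-400:] ** 2)
```

For a pure tone the residual converges essentially to zero; natural speech breaks the assumptions (its "reference" is neither stationary nor perfectly correlated), which is exactly the causality limitation the abstract discusses.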
74

Development of a Large Eddy Simulation Approach for the Simulation of Circulating Fluidized Beds

Özel, Ali 18 October 2011 (has links)
The Eulerian two-fluid approach is generally used to simulate gas-solid flows in industrial circulating fluidized beds. Because of limited computational resources, simulations of large vessels are usually performed on grids that are too coarse. Coarse-grid simulations cannot resolve fine flow scales, which can play an important role in the dynamic behaviour of the bed; in particular, cancelling out the particle segregation effect of small scales leads to inadequate modelling of the mean interfacial momentum transfer between phases and, as a secondary effect, of the particulate shear stresses. An appropriate model accounting for the influence of unresolved structures therefore has to be proposed for coarse-grid simulations. For this purpose, computational grids were refined until a mesh-independent result was obtained, where statistical quantities no longer change with further refinement, for a 3-D periodic circulating fluidized bed. This bed is a simple academic configuration in which a gas-solid flow of A-type particles is periodically driven in the direction opposite to gravity. The particulate momentum and agitation equations are filtered by volume averaging, and the importance of the additional terms arising from the averaging procedure is investigated through budget analyses using the mesh-independent result. The results show that the filtered momentum equations of the phases can be computed in coarse-grid simulations, but the sub-grid drift velocity, due to the sub-grid correlation between the local fluid velocity and the local particle volume fraction, and the particulate sub-grid shear stresses must be taken into account. In this study, we propose functional and structural models for the sub-grid drift velocity, written in terms of the difference between the gas velocity-solid volume fraction correlation and the product of the filtered gas velocity with the filtered solid volume fraction. The particulate sub-grid shear stresses are closed by models proposed for single-phase turbulent flows. The models' predictive abilities are investigated by a priori tests, and they are validated a posteriori by coarse-grid simulations of 3-D periodic circulating and dense fluidized beds and against experimental data from an industrial-scale circulating fluidized bed.
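The sub-grid drift velocity defined in this abstract can be illustrated with a one-dimensional a-priori test: build fine-grained fields in which the gas velocity is anti-correlated with the solid volume fraction, filter them, and compare the gas velocity "seen" by the particles with the resolved one. A toy sketch; the synthetic fields and filter width are invented for illustration and are not taken from the thesis:

```python
import numpy as np

def tophat(f, w):
    """Periodic top-hat filter of width w (circular convolution via FFT)."""
    k = np.zeros(f.size)
    k[:w] = 1.0 / w
    return np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(np.roll(k, -w // 2))))

rng = np.random.default_rng(0)
n, w = 1024, 32                       # fine-grid points, filter ("coarse cell") width
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
# Fine-grained fields: clustered solid volume fraction, and a gas velocity
# anti-correlated with it (gas bypasses the dense clusters).
alpha = 0.05 + 0.04 * np.sin(20 * x) + 0.005 * rng.standard_normal(n)
u_gas = 1.0 - 0.5 * np.sin(20 * x) + 0.05 * rng.standard_normal(n)

alpha_f = tophat(alpha, w)
u_f = tophat(u_gas, w)
u_seen = tophat(alpha * u_gas, w) / alpha_f   # filtered gas velocity "seen" by particles
v_drift = u_seen - u_f                        # sub-grid drift velocity
```

Because gas velocity and solid fraction are anti-correlated below the filter scale, the drift velocity is systematically negative: the particles see a slower gas than the resolved field suggests, which is why neglecting this term over-predicts the drag.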
75

Financial modeling with Volterra Lévy processes and applications to option pricing, interest rates and credit risk modeling

Rahouli, Sami El 28 February 2014 (has links)
This work investigates financial models for option pricing, interest rates and credit risk driven by stochastic processes with memory and discontinuities. These models are formulated in terms of fractional Brownian motion, fractional or filtered (and doubly stochastic) Lévy processes, and their approximations by semimartingales. Their stochastic calculus is treated in the sense of Malliavin, and Itô formulas are derived. We characterize the risk-neutral probability measures in terms of these processes for option-pricing models of Black-Scholes type with jumps. We also study interest-rate models, in particular the Vasicek, Cox-Ingersoll-Ross and Heath-Jarrow-Morton models. Finally, we study credit risk modeling.
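Fractional Brownian motion, the basic memory process behind these models, can be sampled exactly from its covariance function with a Cholesky factorization. A minimal sketch of the standard textbook construction, not the thesis's machinery; the grid size and Hurst value are illustrative:

```python
import numpy as np

def fbm_cholesky(n, hurst, T=1.0, seed=0):
    """Sample fractional Brownian motion on (0, T] at n points from its
    exact covariance 0.5 * (s^2H + t^2H - |t - s|^2H)."""
    t = np.linspace(T / n, T, n)
    s, u = np.meshgrid(t, t)
    h2 = 2 * hurst
    cov = 0.5 * (s ** h2 + u ** h2 - np.abs(s - u) ** h2)
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))  # tiny jitter for stability
    z = np.random.default_rng(seed).standard_normal(n)
    return t, L @ z

t, path = fbm_cholesky(256, hurst=0.7)
```

With H = 0.5 this reduces to ordinary Brownian motion; H > 0.5 gives the long-memory increments that motivate the Volterra/fractional models of the thesis. The O(n³) Cholesky cost is why faster circulant-embedding samplers are used for long paths.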
76

Change point detection and multifractional Brownian motion

Fhima, Mehdi 13 December 2011 (has links)
This Ph.D. dissertation deals with the off-line detection of change points in the parameters of a series of independent random variables, and in the Hurst parameter of multifractional Brownian motion. It consists of three articles. In the first paper, published in Sequential Analysis, we set the cornerstones of the Filtered Derivative with p-value (FDpV) method for detecting change points in the parameters of independent random variables. This method has linear time and memory complexity with respect to the size of the series. It consists of two steps. The first step, based on the Filtered Derivative method, detects the true change points as well as false ones. We improve the Filtered Derivative method by adding a second step, in which we compute the p-value associated with every potential change point detected in the first step and eliminate the false alarms, i.e. the change points whose p-value is smaller than a given critical level. We establish the asymptotic properties needed to calibrate the method, and its effectiveness is demonstrated on both simulated and real data. We then apply the method to detect change points in the Hurst parameter of multifractional Brownian motion, in two phases. The first phase is the subject of a paper to appear in ESAIM P&S, where we establish a Central Limit Theorem for the Increment Ratio Statistic (IRS), an estimator of the Hurst parameter; we then propose a localized version of the IRS and prove a local CLT for estimating the Hurst function of a multifractional Brownian motion. The proofs are intuitive and notably simple, relying on the Breuer-Major theorem and an original "freezing of time" strategy. The second phase relies on a new paper submitted for publication, in which we adapt the FDpV method to detect change points in the Hurst index of piecewise fractional Brownian motion. The underlying statistic of the FDpV algorithm is a new estimator of the Hurst index, the Increment Zero-Crossing Statistic (IZCS), a variant of the IRS. The combined FDpV + IZCS procedure is efficient and fast, with linear time and memory complexity.
77

Wind energy analysis and change point analysis

Haouas, Nabiha 28 February 2015 (has links)
Wind energy, one of the most competitive renewable energies, is regarded as a remedy for the drawbacks of fossil energy. For better management and exploitation of this energy, forecasts of its production are necessary. The forecasting methods used in the literature provide only the annual mean of this production. Some recent works propose using the Central Limit Theorem (CLT), under non-classical hypotheses, to estimate the mean annual production of wind energy, as well as its variance, for a single turbine. In this thesis we extend these works to a wind farm by relaxing the stationarity hypothesis on wind speed and power production, assuming instead that they are seasonal. Under this hypothesis the quality of the annual forecast improves considerably. We also forecast the wind power production over the four seasons of the year. A fractal model allows us to find a "natural" division of the wind-speed series, which refines the estimate of wind production by detecting change points. The last two chapters present statistical tools for change-point detection and for the estimation of fractal models.
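The season-wise CLT idea can be illustrated on synthetic hourly production data: estimate each season's mean and variance separately, then combine them into an annual estimate with a confidence interval. A toy sketch; the seasonal profile, noise level and independence assumption are invented for illustration and are not the thesis's estimator:

```python
import numpy as np

rng = np.random.default_rng(2)
hours = np.arange(365 * 24)
# Seasonal mean power (MW) plus noise; clipped at zero like a real turbine.
seasonal_mean = 1.0 + 0.4 * np.sin(2 * np.pi * hours / (365 * 24))
power = np.clip(seasonal_mean + 0.3 * rng.standard_normal(hours.size), 0, None)

# Season-wise CLT: estimate each season's mean hourly production and the
# variance of that estimate, then combine into an annual figure with a CI.
seasons = np.array_split(power, 4)
season_means = np.array([s.mean() for s in seasons])
season_vars = np.array([s.var(ddof=1) / len(s) for s in seasons])
annual_mean = season_means.mean()                      # mean hourly production
ci_half_width = 1.96 * np.sqrt(season_vars.sum()) / 4  # independence assumed
```

Splitting by season before averaging is what relaxing stationarity buys: each season's estimator sees roughly homogeneous data, so its CLT interval is tighter than one computed as if the whole year were stationary.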
78

An empirical statistical model relating winds and ocean surface currents : implications for short-term current forecasts

Zelenke, Brian Christopher 02 December 2005 (has links)
Graduation date: 2006 / Presented on 2005-12-02 / An empirical statistical model is developed that relates the non-tidal motion of the ocean surface currents off the Oregon coast to forecasts of the coastal winds. The empirical statistical model is then used to produce predictions of the surface currents that are evaluated for their agreement with measured currents. Measurements of the ocean surface currents were made at 6 km resolution using Long-Range CODAR SeaSonde high-frequency (HF) surface current mappers and wind forecasts were provided at 12 km resolution by the North American Mesoscale (NAM) model. First, the response of the surface currents to wind-forcing measured by five coastal National Data Buoy Center (NDBC) stations was evaluated using empirical orthogonal function (EOF) analysis. A significant correlation of approximately 0.8 was found between the majority of the variability in the seasonal anomalies of the low-pass filtered surface currents and the seasonal anomalies of the low-pass filtered wind stress measurements. The U and the V components of the measured surface currents were both shown to be forced by the zonal and meridional components of the wind-stress at the NDBC stations. Next, the NAM wind forecasts were tested for agreement with the measurements of the wind at the NDBC stations. Significant correlations of around 0.8 for meridional wind stress and 0.6 for zonal wind stress were found between the seasonal anomalies of the low-pass filtered wind stress measured by the NDBC stations and the seasonal anomalies of the low-pass filtered wind stress forecast by the NAM model. Given the amount of the variance in the winds captured by the NAM model and the response of the ocean surface currents to both components of the wind, bilinear regressions were formed relating the seasonal anomalies of the low-pass filtered NAM forecasts to the seasonal anomalies of the low-pass filtered surface currents. 
The regressions turned NAM wind forecasts into predictions of the seasonal anomalies of the low-pass filtered surface currents. Calculations of the seasonal cycle in the surface currents, added to these predicted seasonal anomalies, produced a non-tidal estimate of the surface currents that allowed a residual difference to be calculated from recent surface current measurements. The sum of the seasonal anomalies, the seasonal cycle, and the residual formed a prediction of the non-tidal surface currents. The average error in this prediction of the surface currents off the Oregon coast remained less than 4 cm/s up to 48 hours into the future.
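The bilinear regression step described above, relating a current anomaly to the two wind-stress components, reduces to an ordinary least-squares fit with two predictors. A sketch on synthetic anomalies; the coefficients and noise level are illustrative, not the study's values:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2000
tau_u = rng.standard_normal(n)   # zonal wind-stress anomaly (synthetic)
tau_v = rng.standard_normal(n)   # meridional wind-stress anomaly (synthetic)
# Synthetic "truth": the current anomaly responds to both stress components.
u_current = 0.8 * tau_u + 0.3 * tau_v + 0.2 * rng.standard_normal(n)

# Bilinear (two-predictor) least-squares fit, one regression per current component.
X = np.column_stack([tau_u, tau_v, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, u_current, rcond=None)
predicted = X @ coef
r = np.corrcoef(predicted, u_current)[0, 1]
```

Feeding forecast (rather than measured) stress anomalies through `X @ coef` is what turns the fitted relation into a current forecast, to which the seasonal cycle and residual are added back.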
79

A hybrid LES / Lagrangian FDF method on an adaptive, block-structured mesh

Ferreira, Vitor Maciel Vilela 09 April 2015 (has links)
Fundação de Amparo a Pesquisa do Estado de Minas Gerais / This master's thesis is part of a wider research project that aims at developing a computational fluid dynamics (CFD) framework able to simulate the physics of multiple-species mixing flows, with chemical reaction and combustion, using a hybrid Large Eddy Simulation (LES) / Lagrangian Filtered Density Function (FDF) method on an adaptive, block-structured mesh. Since mixing flows exhibit phenomena that can be correlated with combustion in turbulent flows, we give an overview of mixing phenomenology and simulate enclosed, initially segregated two-species mixing flows, in laminar and turbulent states, using the in-house AMR3D code and the newly developed Lagrangian composition FDF code. The first step towards this objective consisted of building a computational model of notional-particle transport in a distributed-processing environment. We achieved this by constructing a parallel Lagrangian map, which can hold different types of Lagrangian elements, including notional particles, particulates, sensors and the computational nodes intrinsic to the Immersed Boundary and Front Tracking methods. The map connects Lagrangian information with the Eulerian framework of the AMR3D code, in which the transport equations are solved. The Lagrangian composition FDF method performs algebraic calculations over an ensemble of notional particles and provides composition fields statistically equivalent to those obtained by finite-difference numerical solution of partial differential equations; we applied the Monte Carlo technique to solve a derived system of stochastic differential equations (SDEs). The results agreed with the benchmarks, which are simulations based on a finite-difference framework solving a filtered composition transport equation. / Master's degree in Mechanical Engineering
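The Monte Carlo solution of the composition FDF advances an ensemble of notional particles whose scalars relax toward the local mean. The sketch below uses the common IEM mixing closure in a single well-mixed cell, purely for illustration; the thesis's SDE system (with particle position transport and multiple cells) is richer:

```python
import numpy as np

# Notional particles carrying a composition scalar phi; mixing is modelled
# with the IEM closure (relaxation to the ensemble mean). Particle position
# transport by a stochastic differential equation is omitted here.
rng = np.random.default_rng(4)
n_particles, n_steps, dt, omega = 5000, 200, 0.01, 1.0
phi = np.where(rng.random(n_particles) < 0.5, 0.0, 1.0)  # segregated start
var0 = phi.var()
for _ in range(n_steps):
    phi += -0.5 * omega * (phi - phi.mean()) * dt  # IEM mixing step
var_end = phi.var()  # scalar variance decays roughly as exp(-omega * t)
```

The ensemble mean is conserved while the scalar variance decays, which is the statistical behaviour a finite-difference solution of the filtered composition transport equation is benchmarked against.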
80

Specificities of electric-acoustic stimulation: indications, bioelectrical interface and coding strategy

Seldran, Fabien 19 December 2011 (has links)
Clinicians sometimes face patients whose hearing loss exceeds 90 dB HL above 1 kHz but who retain good low-frequency residual hearing. Several technologies are available today to restore the high frequencies: conventional amplification, frequency compression, cochlear implantation and, for about ten years, electric-acoustic stimulation (EAS), which stimulates low-frequency sounds acoustically and high-frequency sounds electrically via a cochlear implant. The first part of this dissertation identifies the factors that influence the ability of partially deaf subjects to process low-frequency speech information, using a low-pass filtered speech audiometry test. Our results show that speech intelligibility scores are positively correlated with the duration of deafness: over time, these hearing-impaired subjects learn to understand with this low-pass-like hearing, to the point that some exhibit supranormal abilities in processing low-frequency sounds. Our results also show a negative correlation between the age at onset of deafness and speech intelligibility scores. This test may help the clinician determine which device best fits each patient's profile. The second part of this dissertation, devoted to EAS, evaluates several sound-coding strategies for the EAS implant through simulations in normal-hearing listeners. Currently, the strategy used for EAS is copied from that of the cochlear implant, and our results suggest that it can be optimized.
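The low-pass filtered speech audiometry described above can be simulated by removing everything above the residual-hearing band. A toy sketch with a brick-wall FFT filter on a synthetic two-component "speech" signal; real test material and filter slopes would differ:

```python
import numpy as np

def lowpass(x, fs, cutoff):
    """Brick-wall low-pass filter via the FFT (an idealisation; clinical
    low-pass speech tests use filters with finite slopes)."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    X[freqs > cutoff] = 0
    return np.fft.irfft(X, x.size)

fs = 16000
t = np.arange(fs) / fs                       # one second of signal
# Toy "speech": 150 Hz fundamental plus a 3 kHz formant-like component.
speech = np.sin(2 * np.pi * 150 * t) + 0.5 * np.sin(2 * np.pi * 3000 * t)
residual_band = lowpass(speech, fs, cutoff=1000)   # what residual hearing receives
```

Varying the cutoff and scoring intelligibility at each setting is the basic paradigm behind such a test; the EAS simulations in normal-hearing listeners combine a band like this with a vocoded high-frequency band.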
