1

Smart matrix converter-based grid-connected photovoltaic system for mitigating current and voltage-related power quality issues on the low-voltage grid

Geury, Thomas 20 January 2017
The increasing penetration of distributed energy resources, in particular Photovoltaic (PV) production units, and the ever-growing use of power-electronics-based equipment have led to specific concerns about Power Quality (PQ) in the Low-Voltage (LV) grid. These concerns include high- and low-order current harmonics as well as voltage distortion at the point of common coupling. Solutions to overcome these issues while meeting international grid codes are being proposed in the context of smart energy management schemes. This work proposes a novel three-phase topology for a PV system with enhanced PQ mitigation functionality and tackles the corresponding control challenges.

First, a single-stage current-source inverter PV system with active filtering capability is preferred to the more common two-stage voltage-source inverter topology with an additional voltage step-up converter. The system also guarantees a nearly unitary displacement power factor at the grid connection and allows for Maximum Power Point Tracking (MPPT) with direct control of the PV array power. The grid-synchronised dq-axis grid current references are generated to mitigate the low-order current harmonics drawn by nonlinear loads, without the need for additional measurements. Active damping is used to minimise grid-side filter losses and to reduce the high-order harmonics resulting from converter switching. Results on a 500 W laboratory prototype confirm that active damping reduces the switching harmonics in the grid currents and that active filtering properly mitigates the low-order current harmonics. The MPPT algorithm works effectively under various irradiance variations.

Second, a PV system with a novel Indirect Matrix Converter (IMC)-based unified power quality conditioner topology is developed for enhanced current and voltage compensation capability, with compactness and reliability advantages. PQ issues such as current harmonics and voltage sags, swells, undervoltage and overvoltage are mitigated by the shunt and series converters, respectively. The Space Vector Modulation (SVM) method commonly used in IMCs is developed for this specific topology. In particular, a new shunt converter modulation method is proposed to additionally control the PV array current with zero switching vectors, resulting in a specific switching sequence. A direct sliding-mode control method is also studied separately for the shunt and series converters, so that the zero-vector modulation method of the shunt converter can be used without the sensitive synchronisation of switching signals required by the SVM method. A new dc-link voltage modulation method with 12 voltage zones, instead of 6, is proposed to help overcome the limitation on the choice of shunt converter switching vectors imposed by the positive dc-link voltage constraint. Results are obtained for the direct method on a 1 kW laboratory prototype with an optimised IMC dc-link connection and alternative shunt converter switching transitions that guarantee a positive dc-link voltage. Current and voltage compensation capabilities are confirmed by tests under various operating conditions. / Doctorate in Engineering Sciences and Technology
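The abstract reports an MPPT algorithm with direct control of the PV array power. As a point of reference, here is a minimal perturb-and-observe loop, a common MPPT technique that is not necessarily the one used in the thesis; all names and numbers are illustrative:

```python
def perturb_and_observe(v_prev, p_prev, v_now, i_now, step=0.5):
    """One perturb-and-observe step: return the next PV voltage reference
    plus the operating point to remember for the next iteration."""
    p_now = v_now * i_now
    if p_now > p_prev:
        direction = 1.0 if v_now > v_prev else -1.0  # power rose: keep going
    else:
        direction = -1.0 if v_now > v_prev else 1.0  # power fell: reverse
    return v_now + direction * step, v_now, p_now

def pv_current(v):
    """Toy PV I-V curve; its maximum power point sits near 18.3 V."""
    return max(0.0, 8.0 - 0.008 * v ** 2)

v_ref, v_prev, p_prev = 20.0, 19.5, 19.5 * pv_current(19.5)
for _ in range(50):
    v_ref, v_prev, p_prev = perturb_and_observe(v_prev, p_prev, v_ref,
                                                pv_current(v_ref))
print(round(v_prev, 1))  # hovers within one step of the toy MPP
```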
2

Design and validation of an electrostimulation and EMG measurement platform for monitoring curarisation in anaesthesia

De Bel, Maxime 17 December 2010
Neuromuscular transmission monitoring consists of evaluating the contraction of a muscle in response to trains of electrical pulses delivered to its motor nerve. Since curare blocks communication between the nerve and the muscle, the muscular responses are maximal before the drug is injected and attenuate as the neuromuscular block deepens, until complete paralysis is reached.

Given the main limitations of current curare monitors and the advantages of monitoring a variety of muscles, EMG-based monitoring appears to be the best alternative. It is essentially the shortcomings in the detection and handling of artefacts and electrical disturbances that make current EMG monitors insufficiently reliable and poorly suited to clinical use.

The objective of this work is therefore to identify the causes of these disturbances and to provide the remedies needed to make EMG a trustworthy method for both research and daily practice. / Doctorate in Engineering Sciences
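The attenuation of successive responses within a stimulation train is commonly summarised by the train-of-four (TOF) ratio. The sketch below illustrates that principle with hypothetical response amplitudes; it is not the platform's actual algorithm:

```python
import numpy as np

def train_of_four_ratio(emg_amplitudes):
    """TOF ratio = amplitude of the 4th response over the 1st.

    Near 1.0 means no neuromuscular block; it falls toward 0
    as the block deepens.
    """
    t = np.asarray(emg_amplitudes, dtype=float)
    if t.size != 4 or t[0] <= 0:
        raise ValueError("expected four positive EMG response amplitudes")
    return t[3] / t[0]

# Hypothetical amplitudes of the four evoked EMG responses (arbitrary units).
print(train_of_four_ratio([1.0, 0.9, 0.7, 0.5]))  # 0.5 -> moderate block
```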
3

Optimization of reciprocating compressor maintenance based on performance deterioration study

Vansnick, Michel P.D.G. 21 December 2006
Critical equipment plays an essential role in industry because of its lack of redundancy: a failure results in a major economic burden that affects the profit of the enterprise. The lack of redundancy arises from the high cost of the equipment, usually combined with its high reliability.

As a result, when analysing the reliability of such equipment, there are few opportunities to run pieces of equipment to failure in order to actually verify component life.

Reliability is the probability that an item can perform its intended function for a specified interval of time under stated conditions, while achieving a low long-term cost of ownership for the system considering cost alternatives. From the economic standpoint, the overriding reliability issue is cost, particularly the cost of unreliability of existing equipment caused by failures.

Classical questions about reliability are: … / Doctorate in Applied Sciences
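Reliability as defined above is a survival probability over a stated interval. A standard way to make this concrete is a Weibull lifetime model; the parameter values below are purely illustrative, not taken from the thesis:

```python
import math

def weibull_reliability(t, eta, beta):
    """R(t) = exp(-(t/eta)**beta): probability the item survives to time t.

    eta is the characteristic life; beta > 1 indicates wear-out
    dominated failures, beta < 1 infant mortality.
    """
    return math.exp(-((t / eta) ** beta))

# Illustrative only: characteristic life of 8 years, wear-out regime.
for years in (1, 4, 8):
    print(years, round(weibull_reliability(years, eta=8.0, beta=2.0), 3))
```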
4

New strategies of acquisition and processing of encephalographic biopotentials

Nonclercq, Antoine 04 June 2007
Electroencephalography (EEG) is a medical diagnosis technique. It consists of measuring the biopotentials produced by the upper layers of the brain at various standardised locations on the skull.

Since the biopotentials produced by the upper parts of the brain have an amplitude of about one microvolt, EEG measurements are exposed to many sources of interference.

Moreover, since the present tendency is to measure those signals over periods of several hours, or even several days, human analysis of the recordings becomes extremely long and difficult. The use of signal analysis techniques to help detect paroxysms of clinical interest within the electroencephalogram therefore becomes almost essential. However, the performance of many automatic detection algorithms is significantly degraded by the presence of interference: the quality of the recordings is therefore fundamental.

This thesis explores the benefits that electronics and signal processing can bring to electroencephalography, aiming to improve signal quality and semi-automate data processing. These two aspects are interdependent, because the performance of any semi-automation of the data processing depends on the quality of the acquired signal. Special attention is paid to the interaction between these two goals and to attaining the optimal hardware/software pair.

This thesis offers an overview of the medical electroencephalographic acquisition chain and of its possible improvements.

The conclusions of this work may be extended to other cases of biological signal amplification such as the electrocardiogram (ECG) and the electromyogram (EMG). Such a generalisation would in fact be easier, because those signals have a wider amplitude and are therefore more resistant to interference. / Doctorate in Applied Sciences
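As a hedged illustration of the signal-quality side (not the thesis's actual acquisition chain), the sketch below suppresses 50 Hz mains interference and band-limits an EEG trace with zero-phase IIR filters; the sampling rate and cut-off frequencies are assumed:

```python
import numpy as np
from scipy import signal

fs = 256.0  # assumed sampling rate in Hz

# 50 Hz notch against mains interference, then a 0.5-40 Hz band-pass
# covering the clinically relevant EEG bands.
b_notch, a_notch = signal.iirnotch(w0=50.0, Q=30.0, fs=fs)
sos_band = signal.butter(4, [0.5, 40.0], btype="bandpass", fs=fs, output="sos")

def clean_eeg(raw):
    x = signal.filtfilt(b_notch, a_notch, raw)  # zero-phase notch
    return signal.sosfiltfilt(sos_band, x)      # zero-phase band-pass

# Toy trace: white noise plus strong 50 Hz pickup.
rng = np.random.default_rng(0)
t = np.arange(0, 10, 1 / fs)
eeg = rng.normal(0, 1, t.size) + 5 * np.sin(2 * np.pi * 50 * t)
print(clean_eeg(eeg).shape)
```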
5

Synthesis of pathological voices

Fraj, Samia 18 March 2010
The objective of this thesis is the development and validation of a synthesiser of pathological voices. Few studies have been devoted to the synthesis of voices with vocal dysperiodicities, despite the many arguments in favour of developing and improving simulators of dysphonic voices. In this work, we implemented a synthesiser that allows fine control of the glottal excitation and can therefore effectively simulate the different categories of dysphonia.

The simulated disorders are vocal jitter, vocal tremor, biphonation, diplophonia and random vibrations. Vocal shimmer results from modulation distortion in the vocal tract, which transforms jitter into shimmer. Breathiness is synthesised by modulating Brownian noise.

Preliminary experiments showed the synthesiser's ability to produce different categories of vowels. For validation, we used models of simulated disorders. The results of perceptual evaluation experiments, carried out on corpora of synthetic or human stimuli, modal or dysphonic, are encouraging and show the synthesiser's ability to produce both modal and disordered voices with timbres indistinguishable from human ones. Finally, the results of an exploitation experiment on the classification of synthetic stimuli along the ordinal GRB scales suggest that the simulated disorders and the perceptual evaluations agree. Also, the perceptual scores predicted from the synthesiser's control parameters and the scores assigned by experts are strongly correlated. / Doctorate in Engineering Sciences
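To make the notion of vocal jitter concrete, here is a minimal sketch of an impulse-train excitation whose cycle-to-cycle periods are randomly perturbed. This is one simple way to simulate jitter, not the thesis's glottal model, and all parameter values are illustrative:

```python
import numpy as np

def pulse_train_with_jitter(f0=120.0, jitter_pct=2.0, n_pulses=200, fs=16000):
    """Impulse train whose periods are randomly perturbed.

    jitter_pct is the standard deviation of the cycle-to-cycle period
    perturbation as a percentage of the nominal period 1/f0.
    """
    rng = np.random.default_rng(1)
    t0 = 1.0 / f0
    periods = t0 * (1 + rng.normal(0.0, jitter_pct / 100.0, n_pulses))
    onsets = np.cumsum(periods)                 # jittered pulse instants
    x = np.zeros(int(onsets[-1] * fs) + 1)
    x[(onsets * fs).astype(int)] = 1.0          # place unit impulses
    return x

excitation = pulse_train_with_jitter()
print(excitation.sum())  # 200 pulses, irregularly spaced
```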
6

Numerical study of coherent structures within a legacy LES code and development of a new parallel framework for their computation

Giammanco, Raimondo 22 December 2005
The understanding of the physics of Coherent Structures and of their interaction with the remaining fluid motions is of paramount interest in turbulence research. Indeed, it has recently been suggested that separating and understanding the different physical behaviours of Coherent Structures and of the "incoherent" background might very well be the key to understanding and predicting turbulence. Available knowledge of Coherent Structures shows that their size is considerably larger than the turbulent macro-scale, which makes the application of Large Eddy Simulation (LES) to their simulation and study permissible, with the advantage of being able to study their behaviour at higher Reynolds numbers and in more complex geometries than a Direct Numerical Simulation would normally allow. The original purpose of the present work was therefore to validate the use of LES for the study of Coherent Structures in shear layers and to apply it to different flow cases in order to study the effect of flow topology on the nature of the Coherent Structures.

However, during the investigation of the presence of Coherent Structures in numerically generated LES flow fields, the ageing in-house LES code of the Environmental & Applied Fluid Dynamics Department showed a series of limitations and shortcomings that led to the decision to relegate it to the status of legacy code (from now on referred to as the VKI LES legacy code) and to discontinue its development. A new natively parallel LES solver was then developed in the VKI Environmental & Applied Fluid Dynamics Department, in which all the shortcomings of the legacy code were addressed and modern software technologies were adopted both for the solver and for the surrounding infrastructure, delivering a complete framework based exclusively on Free and Open Source Software (FOSS) to maximise portability and avoid any dependency on commercial products. The new parallel LES solver retains some basic characteristics of the old legacy code to provide continuity with the past (finite differences, staggered grid arrangement, multi-domain technique, grid conformity across domains), but improves on almost all the remaining aspects: the flow can now be inhomogeneous in all three directions, instead of only two; the pressure equation can be solved using a three-point stencil for improved accuracy; and the viscous and convective terms can be computed using the computer algebra system Maxima, deriving the discretised formulas automatically.

For the convective terms, high-resolution central schemes have been adapted to the three-dimensional staggered grid arrangement from a collocated two-dimensional one, and a system of master-slave simulations has been developed in which a slave simulation (on 1 processing element) runs in parallel to generate the inlet data for the master simulation (n - 1 processing elements). The code performs automatic run-time load balancing and domain auto-partitioning, has embedded documentation (doxygen) and a CVS repository (version management) for the ease of both new and old developers. As part of the new framework, a set of visual programs is provided for IBM Open Data eXplorer (OpenDX), a powerful FOSS flow visualisation and analysis tool intended as a replacement for the commercial Tecplot, together with a bug-tracking mechanism via Bugzilla and cooperative forum resources (phpBB) for developers and users alike. The new M.i.O.m.a. (MiOma) solver is ready to be used for Coherent Structures analysis in the near future. / Doctorate in Applied Sciences
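The solver retains the staggered grid arrangement of the legacy code. As a minimal illustration of what that arrangement implies for discretisation (this is not code from MiOma), here is a second-order central-difference divergence evaluated at cell centres from face-centred velocities:

```python
import numpy as np

def divergence_staggered(u, v, dx, dy):
    """Divergence at cell centres from face-centred velocities.

    u (shape (nx+1, ny)) lives on x-faces and v (shape (nx, ny+1))
    on y-faces, as in a staggered (MAC) arrangement; second-order
    central differences land naturally on the cell centres.
    """
    return (u[1:, :] - u[:-1, :]) / dx + (v[:, 1:] - v[:, :-1]) / dy

nx, ny, dx, dy = 8, 8, 0.1, 0.1
u = np.ones((nx + 1, ny))   # uniform flow: divergence should vanish
v = np.zeros((nx, ny + 1))
print(np.abs(divergence_staggered(u, v, dx, dy)).max())  # 0.0
```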
7

Development and validation of NESSIE: a multi-criteria performance estimation tool for SoC

Richard, Aliénor 18 November 2010
The work presented in this thesis aims at validating an original multi-criteria performance estimation tool, NESSIE, dedicated to performance prediction in order to accelerate the design of electronic embedded systems.

This tool was developed in a previous thesis to cope with the limitations of existing design tools, and it offers a new solution to the growing complexity of current applications and electronic platforms and the multiple constraints they are subjected to. More precisely, the goal of the tool is to propose a flexible framework targeting embedded systems in a generic way and to enable a fast exploration of the design space based on the estimation of user-defined criteria and a joint hierarchical representation of the application and the platform.

In this context, the purpose of the thesis is to put the original NESSIE framework to the test and analyse whether it is indeed useful and able to solve current design problems. Hence, the dissertation presents:

- A study of the state of the art in design tools. I propose a classification of these tools and compare them on typical criteria. This substantial survey completes the state of the art established in the previous work, and it shows that the NESSIE framework offers solutions to the limitations of these tools.
- The framework of our original mapping tool and its calculation engine. Through this presentation, I highlight the main ingredients of the tool and explain the implemented methodology.
- Two external case studies, chosen to validate NESSIE, which form the core of the thesis. They pose two different design problems (a reconfigurable processor, ADRES, applied to a matrix multiplication kernel, and a 3D-stacked MPSoC problem applied to a video decoder) and show the ability of our tool to target different applications and platforms.

The validation is performed by comparing a multi-criteria estimation of the performances for a significant number of solutions between NESSIE and the external design flow. In particular, I discuss the prediction capability of NESSIE and the accuracy of the estimation. The study is completed, for each case study, by a quantification of the modelling time and the design time in both flows, in order to analyse the gain achieved by our tool when used upstream of the classical tool chain, compared to the existing design flow alone.

The results show that NESSIE is able to predict with a high degree of accuracy which solutions are the best candidates further down the design flow. Moreover, in both case studies, modelled respectively at a low and at a higher abstraction level, I obtained a significant gain in design time. However, I also identified limitations that impact the modelling time and could prevent an efficient use of the tool for more complex problems. To cope with these issues, I end by proposing several improvements to the framework and giving perspectives for further development of the tool. / Doctorate in Engineering Sciences
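As a rough illustration of what mapping and multi-criteria estimation involve (this is a toy, not NESSIE's engine or API), the loop below exhaustively maps three tasks onto two processing elements and ranks the mappings on latency, then energy; every name and number is invented:

```python
from itertools import product

tasks = {"fetch": 2.0, "decode": 3.0, "render": 5.0}  # workload units
pes = {"pe0": 1.0, "pe1": 0.5}                        # time per unit

def estimate(mapping):
    """Return (latency, energy) for a task-to-PE mapping.

    pe1 is twice as fast but spends twice the energy per unit,
    giving the estimator a genuine trade-off to rank.
    """
    finish = {pe: 0.0 for pe in pes}
    for task, pe in mapping.items():
        finish[pe] += tasks[task] * pes[pe]
    latency = max(finish.values())
    energy = sum(tasks[t] * (2.0 if pe == "pe1" else 1.0)
                 for t, pe in mapping.items())
    return latency, energy

# Exhaustive design-space exploration: 2**3 candidate mappings.
best = min((dict(zip(tasks, m)) for m in product(pes, repeat=len(tasks))),
           key=estimate)
print(best, estimate(best))
```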
8

Developing multi-criteria performance estimation tools for Systems-on-chip

Vander Biest, Alexis 23 March 2009
The work presented in this thesis targets the analysis and implementation of multi-criteria performance prediction methods for Systems-on-Chip (SoC).

These new SoC architectures offer the opportunity to integrate complete heterogeneous systems into a single chip and can be used to design battery-powered handhelds, security-critical systems, consumer electronics devices, etc. However, this variety of applications usually comes with many different performance objectives, such as power consumption, yield, design cost, production cost, silicon area and many others. These performance requirements are often very difficult to meet together, so SoC design usually relies on making the right design choices and finding the best performance compromises.

In parallel with this architectural paradigm shift, new Very Deep Submicron (VDSM) silicon processes have more and more impact on performance and deeply modify the way a VLSI system is designed, even at the first stages of a design flow. In such a context, where many new technological and system-related variables enter the game, early exploration of the impact of design choices becomes crucial for estimating the performance of the system to be designed and reducing its time-to-market.

In this context, this thesis presents:

- A study of state-of-the-art tools and methods used to estimate the performances of VLSI systems, and an original classification based on several features and concepts that they use. Based on this comparison, we highlight their weaknesses and gaps in order to identify new opportunities in performance prediction.
- The definition of new concepts to enable the automatic exploration of large design spaces, based on flexible performance criteria and degrees of freedom representing design choices.
- The implementation of two new tools of our own:
  - Nessie, a tool enabling the hierarchical representation of an application along with its platform, which automatically performs the mapping and the estimation of their performance.
  - Yeti, a C++ library enabling the definition and value estimation of closed-form expressions and table-based relations. It provides the user with input and model sensitivity analysis capability, simulation scripting, run-time building and automatic plotting of the results. Additionally, Yeti can work in standalone mode to provide the user with an independent framework for model estimation and analysis.

To demonstrate the use and interest of these tools, we provide several case studies whose results are discussed and compared with the literature. Using Yeti, we successfully reproduced the results of a model estimating multi-core computation power and extended them thanks to the representation flexibility of our tool. We also built several models from the ground up to help dimension interconnect links and optimise clock frequency. Thanks to Nessie, we were able to reproduce the NoC power consumption results of an H.264/AVC decoding application running on a multicore platform. These results were then extended to the case of a 3D die-stacked architecture, and the performance benefits are discussed. We end by highlighting the advantages of our technique and discussing future opportunities for performance prediction tools to explore. / Doctorate in Engineering Sciences
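To give a flavour of the closed-form models and one-at-a-time sensitivity analysis that Yeti supports, here is a toy interconnect power model written in plain Python rather than Yeti's C++ API; all parameter values are invented:

```python
def link_power(c_pf_per_mm, length_mm, vdd, activity, f_hz):
    """Toy closed-form model: dynamic power of one interconnect link,
    P = a * C * Vdd**2 * f."""
    c_farads = c_pf_per_mm * 1e-12 * length_mm
    return activity * c_farads * vdd ** 2 * f_hz

base = dict(c_pf_per_mm=0.2, length_mm=3.0, vdd=1.0,
            activity=0.15, f_hz=500e6)
p0 = link_power(**base)

# One-at-a-time sensitivity: relative output change for a +10% input bump.
# Linear parameters give 0.1; vdd, being quadratic, gives 0.21.
for name in base:
    bumped = dict(base, **{name: base[name] * 1.1})
    print(name, round(link_power(**bumped) / p0 - 1, 3))
```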
9

Contribution to the dimensioning of an on-body radio link: channel and antenna studies at 60 GHz

Razafimahatratra, Solofo 14 November 2017
The band around 60 GHz is interesting for Body Area Network (BAN) applications, mainly for its lower interference than at microwave frequencies, a wide available band suited to On-Off Keying (OOK) modulation for low energy consumption and low data-rate communication (under 10 Mbps), and antenna miniaturisation. Nevertheless, because of the high attenuation at this frequency, the design of reliable and energy-efficient communications for BANs requires a detailed analysis of the body channel. A planar and compact SIW horn antenna was designed and used for body channel measurements at 60 GHz. The main contribution in the antenna design is the bandwidth enhancement covering the whole available band around 60 GHz, compared to the same antenna type available at this frequency. The on-body measurements with this antenna show that short-distance and Line-Of-Sight (LOS) links are possible at 60 GHz. Body dynamics are taken into account through statistical off-body channel measurements. For the first time, measurements were made for the same scenarios at 60 GHz and at another frequency in the Ultra WideBand suitable for OOK impulse radio modulation. Taking into account transmission power standards and the sensitivity of low-power receivers reported in the literature, the potential of 60 GHz for BANs is shown with an outage probability lower than 8%, whereas this parameter is lower than 15% at 4 GHz. When characterising an antenna on the body, difficulties arise in antenna de-embedding because of antenna-body coupling; in fact, the antenna gain depends on the transmitter-receiver distance on the body. For the first time, a formulation of the gain of a vertical dipole on the body is given. A new theoretical approach based on the complex images method is also proposed to compare two types of canonical antennas radiating on the body: a vertical dipole and different rectangular apertures are normalised through their input impedance with the same accepted power. The aperture input impedance formulation was developed during this study. The aperture efficiencies are 10% higher when the antennas are at a height lower than 3 mm above the body phantom. The received power increases with antenna size only for direct contact with the phantom; the difference among the considered antennas, limited to a monomode configuration, is lower than 4 dB. / Doctorate in Engineering Sciences and Technology
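The outage figures quoted above compare received power against receiver sensitivity. A minimal sketch of that computation follows, using invented log-normal shadowing samples rather than the thesis's measured data:

```python
import numpy as np

def outage_probability(rx_power_dbm, sensitivity_dbm):
    """Fraction of channel samples whose received power falls below
    the receiver sensitivity: the outage criterion used above."""
    rx = np.asarray(rx_power_dbm)
    return np.mean(rx < sensitivity_dbm)

# Toy channel: -55 dBm mean received power with 6 dB log-normal
# shadowing (illustrative numbers, not the thesis measurements).
rng = np.random.default_rng(7)
samples = -55 + 6 * rng.standard_normal(10_000)
print(outage_probability(samples, sensitivity_dbm=-65))
```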
10

Analysis of vocal tremor and application to parkinsonian speakers

Cnockaert, Laurence 13 February 2008
Quantitative speech analysis is commonly practised in clinical settings. It is a non-invasive means of evaluation for the quantitative documentation of voice quality and for its follow-up over time. In a clinical setting, speech signal analysis methods must be reliable enough to process speech signals from dysphonic and elderly speakers. Moreover, the results of these analyses must be reducible to a small number of relevant acoustic indices that clinicians can interpret.

In this thesis, we focused on the characterisation of the low-frequency modulations of the speech signal and on its application to speakers with Parkinson's disease and to normophonic speakers. On the one hand, we studied the estimation of the modulations of the phonatory frequency, which is the fundamental frequency of the speech signal. On the other hand, we examined methods for characterising the modulations of the formant frequencies, which are the effects of the vocal tract resonances on the speech signal. We developed methods based on continuous wavelet transforms to analyse these modulations. We were also interested in applying methods that estimate an acoustically equivalent vocal tract from the speech signal.

We applied these methods to speech signals from three corpora. The first corpus is composed of speakers with Parkinson's disease and normophonic speakers, the second of parkinsonian speakers recorded in two pharmacological states, and the third of parkinsonian speakers recorded before and after voice therapy. Statistical analyses showed significant differences between the modulation indices as a function of health status, as a function of pharmacological state, and over the course of voice therapy. / Doctorate in Engineering Sciences
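Since the analysis relies on continuous wavelet transforms of the phonatory-frequency contour, a minimal numpy-only Morlet CWT conveys the idea; the tremor frequency, sampling rate and scale grid below are illustrative, not the thesis's settings:

```python
import numpy as np

def morlet_cwt(x, fs, freqs, w0=6.0):
    """Continuous wavelet transform with a complex Morlet wavelet.

    Returns |coefficients| with shape (len(freqs), len(x)); each row
    tracks the modulation energy around one centre frequency.
    """
    out = np.empty((len(freqs), len(x)))
    for k, f in enumerate(freqs):
        s = w0 * fs / (2 * np.pi * f)             # scale for centre freq f
        t = np.arange(-4 * s, 4 * s + 1) / s      # wavelet support
        psi = np.exp(1j * w0 * t) * np.exp(-t**2 / 2) / np.sqrt(s)
        out[k] = np.abs(np.convolve(x, psi, mode="same"))
    return out

# Toy contour: 150 Hz mean F0 with a 5 Hz, 3 Hz-deep tremor, sampled at 100 Hz.
fs = 100.0
t = np.arange(0, 4, 1 / fs)
f0 = 150 + 3 * np.sin(2 * np.pi * 5 * t)
coeffs = morlet_cwt(f0 - f0.mean(), fs, freqs=np.arange(2, 16))
print(coeffs.shape)  # (14, 400); the 5 Hz row carries the tremor energy
```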
