  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
351

Process optimization of nanoemulsion preparation containing olive oil: high-pressure homogenization and D-phase emulsification

Yukuyama, Megumi Nishitani 11 December 2017
Olive oil has extensive applicability in the development of pharmaceuticals and cosmetics. In the pharmaceutical field, this oil has been used to prepare delivery nanosystems for poorly water-soluble drugs. Such nanosystems, including nanoemulsions, can be obtained by high-energy (mechanical) and low-energy (physicochemical) processes. The aim of the present study was to gain a deeper understanding of the preparation and applicability of nanoemulsions obtained by high-pressure homogenization (HPH) and D-phase emulsification (DPE), high- and low-energy methods respectively. The knowledge acquired through the systematization of the research and the activities performed resulted in two review articles and two research articles.

The first review article identified, with a view to developing a high-efficacy nanoemulsion, the need to understand the interaction between the drug and the nanoemulsion components; the impact of the preparation process on the components and on nanoemulsion stability; and the influence of the formulation on drug release and uptake by different administration routes. In addition, we described the development of different nanoemulsions, according to the surfactant selected for their preparation and their respective applications. The second review article identified appropriate process selection as the key factor in obtaining nanoemulsions with the desired properties, offering researchers realistic industrial-scale alternatives for nanoemulsion production. Regarding the research articles, the first confirmed the possibility of obtaining nanoemulsions with a high vegetable oil content (40% w/w) using a single surfactant at a concentration of 2% w/w. This result, considered a challenge for conventional low-energy processes, was successfully achieved using the DPE process. The polyol was confirmed as a statistically significant variable for the reduction of the nanoemulsion mean particle size (MPS), synergistically influencing the behavior of the surfactant during the transition phase of this method. The second research article identified and explained the relationship between composition and process variables and the MPS in the development of olive oil nanoemulsions using the HPH and DPE processes. The statistical approach revealed the optimal ranges of the composition and process variables for obtaining the desired MPS, the critical quality attribute, yielding a better understanding of both processes.

In addition, the design-space concept and optimization by means of the desirability function allowed a 275 nm nanoemulsion to be obtained successfully, using the same composition for both the HPH and DPE processes. The knowledge acquired in these studies can guide the successful development of innovative nanoemulsions.
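The desirability-function optimization mentioned in this abstract can be sketched with the classic Derringer-Suich target-is-best formulation. This is a minimal generic illustration, not the thesis's actual model; the bounds and the 275 nm target used below are assumed values for demonstration only:

```python
def desirability_target(y, low, target, high, s=1.0, t=1.0):
    """Derringer-Suich desirability for a target-is-best response.

    Rises from 0 at `low` to 1 at `target`, then falls back to 0 at
    `high`; s and t shape the two branches.
    """
    if y <= low or y >= high:
        return 0.0
    if y <= target:
        return ((y - low) / (target - low)) ** s
    return ((high - y) / (high - target)) ** t

def overall_desirability(ds):
    """Geometric mean of individual desirabilities; 0 if any is 0."""
    prod = 1.0
    for d in ds:
        if d == 0.0:
            return 0.0
        prod *= d
    return prod ** (1.0 / len(ds))
```

An optimizer then searches the design space for the composition and process settings that maximize the overall desirability.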
352

Ultrafast low-energy electron diffraction at surfaces / Probing transitions and phase-ordering of charge-density waves

Vogelgesang, Simon 05 December 2018
No description available.
353

Analysis and restoration of Low Energy Electron Microscopy videos

Contato, Welinton Andrey 11 October 2016
Low Energy Electron Microscopy (LEEM) is a recent and powerful surface science imaging modality prone to considerable amounts of degradation, such as noise and blurring. Still not fully addressed in the literature, this work aimed at analyzing and identifying the sources of degradation in LEEM videos, as well as the adequacy of existing noise reduction and deblurring techniques for LEEM data. It also presents two new noise reduction techniques aimed at preserving texture and small details. Our analysis revealed that LEEM images exhibit a large amount and variety of noise, with Gaussian noise being the most frequent. To handle the deblurring issue, the Point Spread Function (PSF) of the microscope used in the experiments was also estimated. This work further studied the combination of deblurring and denoising techniques for Gaussian noise. Results showed that non-local techniques such as Non-Local Means (NLM) and Block-Matching 3-D (BM3D) are more adequate for filtering LEEM images while preserving discontinuities. We also showed that some deblurring techniques are not suitable for LEEM images, except the Richardson-Lucy (RL) approach, which coped with most of the blur without adding extra degradation.

The undesirable removal of small structures and texture by existing denoising techniques encouraged the development of two novel Gaussian denoising techniques (NLM3D-LBP-MSB and NLM3D-LBP-Adaptive), which exhibited good results for images with a large amount of texture. However, BM3D was superior for images with large homogeneous regions. Quantitative experiments were carried out on synthetic images. For real LEEM images, a qualitative analysis was conducted in which observers visually assessed restoration results for existing techniques as well as the two proposed ones. This experiment showed that non-local denoising methods were superior, especially when combined with the RL method. The proposed methods produced good results, but were outperformed by NLM and BM3D. This work has shown that non-local denoising techniques are the most adequate for LEEM data, and that the RL technique is very efficient for deblurring purposes.
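The Richardson-Lucy deconvolution credited above with removing most of the blur can be sketched in one dimension. This is a generic textbook implementation, not the authors' code; the toy signal and PSF in the test are assumptions for illustration:

```python
import numpy as np

def richardson_lucy_1d(observed, psf, iterations=30, eps=1e-12):
    """Richardson-Lucy deconvolution for a 1-D signal.

    Multiplicative update: u <- u * (psf_flipped (*) (observed / (psf (*) u))),
    where (*) denotes convolution with 'same' padding. The estimate stays
    non-negative, and for noiseless blur it converges toward the original.
    """
    estimate = np.full(len(observed), float(observed.mean()))
    psf_flipped = psf[::-1]
    for _ in range(iterations):
        reblurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / (reblurred + eps)
        estimate *= np.convolve(ratio, psf_flipped, mode="same")
    return estimate
```

Blurring a sparse test signal with a small PSF and deconvolving it recovers the peaks to within a small residual, which is the behavior exploited here before denoising.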
354

Reliability of tools for predicting the behavior of complex thermal systems

Merheb, Rania 04 December 2013
Designing buildings with low energy consumption has become a very important issue in order to minimize energy use and the associated greenhouse gas emissions. To achieve this, it is essential to know the potential sources of bias and uncertainty in the field of building thermal modeling, and to characterize and evaluate them. To meet current requirements in terms of reliable prediction of building thermal behavior, in this thesis we quantify the uncertainties associated with influential parameters, propose a technique for diagnosing the building envelope, propagate uncertainties through a simplified model via a set-membership method, and finally propose an approach to identify the most influential modeling parameters and evaluate their impact on energy performance at the lowest cost in terms of simulations.
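The set-membership ("ensembliste") propagation mentioned in this abstract can be illustrated with minimal interval arithmetic, where each uncertain parameter is carried as a guaranteed [lo, hi] enclosure and every operation returns an enclosure of the result. This sketch covers only addition and multiplication and is not the thesis's actual method:

```python
class Interval:
    """Minimal interval arithmetic for set-membership uncertainty
    propagation: operations on [lo, hi] ranges return guaranteed
    enclosures of the exact result set."""

    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        # [a,b] + [c,d] = [a+c, b+d]
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # [a,b] * [c,d]: take min/max over all endpoint products,
        # which handles sign changes correctly.
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))
```

Propagating every uncertain input this way through a simplified thermal model yields guaranteed bounds on the predicted output rather than a single point estimate.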
355

Dynamic Thermal Modeling of Electrical Appliances for Energy Management of Low Energy Buildings

Park, Herie 15 May 2013
This work proposes a dynamic thermal model of electrical appliances within low-energy buildings. It aims to evaluate the influence of the thermal gains of these appliances on buildings, and argues for the necessity of dynamic thermal modeling of electrical appliances for the energy management of low-energy buildings and the thermal comfort of inhabitants. Since electrical appliances are one of the free internal heat sources of a building, the building that thermally interacts with them has to be modeled. Accordingly, a test room representing a small-scale laboratory set-up of a low-energy building is first modeled, based on the first principle of thermodynamics and the thermal-electrical analogy. Then, in order to establish the thermal model of electrical appliances, the appliances are classified into four categories from thermal and electrical points of view. A generic, physically driven thermal model of the appliances is then derived, also based on the first principle of thermodynamics. Along with this modeling, the experimental protocol and the identification procedure used to estimate the thermal parameters of the appliances are presented.

In order to analyze the relevance of the proposed generic model applied to practical cases, several electrical appliances widely used in residential buildings, namely a monitor, a computer, a refrigerator, a portable electric convection heater, and a microwave oven, are chosen to study and validate the proposed generic model together with the measurement and identification protocols. Finally, the proposed dynamic thermal model of electrical appliances is integrated into a residential building model developed and validated by the French Technical Research Center for Building (CSTB) on a real building. This coupled model of the appliances and the building is implemented in SIMBAD, a building energy simulation toolbox for Matlab/Simulink®. Through simulation, the thermal behavior and heating energy use of the building are observed over a winter period; in addition, the thermal discomfort caused by the use of electrical appliances over a summer period is studied and quantified. This work therefore provides quantitative results on the thermal effect of differently characterized electrical appliances within a low-energy building and allows their thermal dynamics and interactions to be observed. Consequently, it contributes to the energy management of low-energy buildings and to the thermal comfort of inhabitants in accordance with the usage of electrical appliances.
356

Methodology to estimate building energy consumption using artificial intelligence

Paudel, Subodh 22 September 2016
High energy-efficiency building standards (such as the Low Energy Building, LEB) intended to improve building consumption have drawn significant attention. Such standards focus on improving the thermal performance of the envelope and on high heat capacity, thus creating a higher thermal inertia. However, the LEB concept introduces a large time constant as well as a large heat capacity, resulting in a slower rate of heat transfer between the interior of the building and the outdoor environment. It is therefore challenging to estimate and predict the thermal energy demand of such LEBs. This work focuses on artificial intelligence (AI) models to predict the energy consumption of LEBs. We consider two kinds of AI modeling approaches: "all data" and "relevant data". The "all data" approach uses all available data, while the "relevant data" approach uses a small, representative day dataset and addresses the complexity of the building's non-linear dynamics by introducing the climatic influence of past days. This extraction is based either on simple physical understanding, Heating Degree Days (HDD) and modified HDD, or on pattern recognition methods, the Fréchet distance and Dynamic Time Warping (DTW). Four AI techniques have been considered: Artificial Neural Network (ANN), Support Vector Machine (SVM), Boosted Ensemble Decision Tree (BEDT), and Random Forest (RF). In a first part, numerical simulations for six buildings (heat demand in the range 25–85 kWh/m².yr) were performed; the "relevant data" approach with (DTW, SVM) gave the best results. Measurements from the building of the Ecole des Mines de Nantes confirmed that the approach remains effective on real data.
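The two families of day-selection criteria named above, physical (HDD) and pattern-based (DTW), can be sketched as follows. Both are generic textbook implementations, not the thesis's code:

```python
def heating_degree_days(daily_mean_temps, base=18.0):
    """Heating Degree Days: sum of the shortfall of each day's mean
    temperature below the heating base temperature."""
    return sum(max(0.0, base - t) for t in daily_mean_temps)

def dtw_distance(a, b):
    """Dynamic Time Warping distance between two sequences.

    Classic O(len(a) * len(b)) dynamic program with an
    absolute-difference local cost; warping lets similarly shaped
    climate profiles match even when features are time-shifted.
    """
    n, m = len(a), len(b)
    inf = float("inf")
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]
```

In the "relevant data" approach, days whose climate profile has a small DTW distance to the forecast profile are the ones retained for training.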
357

In situ assessment of the thermal insulation performance of building envelopes

Thébault, Simon Romain 27 January 2017
The global context of energy savings and greenhouse gas emission control has led to significant efforts in France to improve the thermal insulation of buildings in order to reduce heating consumption. Nevertheless, the stated thermal performance before construction or refurbishment is rarely achieved in practice, for many reasons (calculation errors, defects in materials or workmanship, etc.). Yet guaranteeing the real thermal performance of buildings on site is crucial to develop the refurbishment market and the construction of energy-efficient buildings. To do so, techniques for measuring intrinsic thermal insulation performance indicators are needed. Such techniques already exist worldwide and consist in processing measurement data on indoor and outdoor thermal conditions and heat consumption.

Some of them have already proved themselves in the field, but are either constraining or rather imprecise, and above all the related uncertainty calculations are often rough. The objective of this thesis, funded by CSTB, is to consolidate a novel method for measuring the overall thermal insulation quality of a building at the reception of works (the ISABELE method). In the first chapter, a state of the art of existing methods identifies possible improvements on the basis of a comparative synthesis; the primary one concerns the uncertainty calculation method, a central issue of the problem. The second chapter presents a global methodology combining the propagation of random errors through a Bayesian approach with that of systematic errors through a more classical approach. One of the most important sources of uncertainty is the evaluation of the infiltration air flow during the test. The third chapter investigates the characterization of this uncertainty, as well as its impact on the final result, depending on the chosen experimental approach (rule of thumb, simplified airflow models, tracer gas). Lastly, an improved treatment of the building's thermal dynamics during the test is proposed in the final chapter. It rests on adapting the inverse thermal model to the building type and the test conditions: the proposed algorithm selects a model from a set of simplified grey-box models on the basis of statistical criteria and the principle of parsimony. All these contributions were tested on a large series of measurements on a single timber-framed building (the OPTIMOB shed). The robustness and precision of the results were slightly improved, and the infiltration air flow calculation, neither too simple nor too complicated, was validated. Finally, the minimal test duration required was determined as a function of the building's thermal inertia class.
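As a zeroth-order illustration of what such measurement methods estimate, a building's overall heat transfer coefficient HTC (W/K) can be fitted by least squares to the quasi-steady-state model Q = HTC · (Tin − Tout). This deliberately ignores the thermal dynamics, infiltration, and error propagation that the ISABELE method actually handles; it is only a sketch of the underlying idea:

```python
def estimate_htc(heat_input_w, delta_t_k):
    """Least-squares estimate of the overall heat transfer coefficient.

    For the no-intercept linear model Q_i = HTC * dT_i, the closed-form
    least-squares solution is HTC = sum(Q_i * dT_i) / sum(dT_i ** 2).
    heat_input_w: heating power samples (W)
    delta_t_k:    simultaneous indoor-outdoor temperature differences (K)
    """
    num = sum(q * d for q, d in zip(heat_input_w, delta_t_k))
    den = sum(d * d for d in delta_t_k)
    return num / den
```

A full in situ method must additionally attach an uncertainty interval to this value, which is precisely the problem the thesis addresses.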
358

Empirical validation of models: application to low-energy buildings

Bontemps, Stéphanie 02 December 2015
With the generalization of low-energy, passive, and positive-energy building construction, as well as the renovation of the existing building stock, it is essential to use simulation to estimate, among other things, the energy and environmental performance achieved by these new buildings. As expectations regarding guaranteed energy performance grow, it is crucial to ensure the reliability of the simulation tools being used. Indeed, simulation codes should reflect the behavior of these new kinds of buildings in the most consistent and accurate manner possible. Moreover, the uncertainties related to design parameters, as well as to the various solicitations and to building use, have to be taken into account in order to guarantee building energy performance over its lifetime. This thesis investigates the empirical validation of models applied to a test-cell building. The validation process is divided into several steps, during which the quality of the model is evaluated in terms of consistency and accuracy. Several case studies were carried out, from which we were able to identify the parameters most influential on the model output, examine the influence of the time step on the empirical validation process, analyze the influence of initialization, and confirm the methodology's ability to test the model.
359

Energy-Efficient Turbo Decoder for 3G Wireless Terminals

Al-Mohandes, Ibrahim January 2005
Since its introduction in 1993, the turbo coding error-correction technique has generated a tremendous interest due to its near Shannon-limit performance. Two key innovations of turbo codes are parallel concatenated encoding and iterative decoding. In its IMT-2000 initiative, the International Telecommunication Union (ITU) adopted turbo coding as a channel coding standard for Third-Generation (3G) wireless high-speed (up to 2 Mbps) data services (cdma2000 in North America and W-CDMA in Japan and Europe). For battery-powered hand-held wireless terminals, energy consumption is a major concern. In this thesis, a new design for an energy-efficient turbo decoder that is suitable for 3G wireless high-speed data terminals is proposed. The Log-MAP decoding algorithm is selected for implementation of the constituent Soft-Input/Soft-Output (SISO) decoder; the algorithm is approximated by a fixed-point representation that achieves the best performance/complexity tradeoff. To attain energy reduction, a two-stage design approach is adopted. First, a novel dynamic-iterative technique that is appropriate for both good and poor channel conditions is proposed, and then applied to reduce energy consumption of the turbo decoder. Second, a combination of architectural-level techniques is applied to obtain further energy reduction; these techniques also enhance throughput of the turbo decoder and are area-efficient. The turbo decoder design is coded in the VHDL hardware description language, and then synthesized and mapped to a 0.18 µm CMOS technology using the standard-cell approach. The designed turbo decoder has a maximum data rate of 5 Mb/s (at an upper limit of five iterations) and is 3G-compatible. Results show that the adopted two-stage design approach reduces energy consumption of the turbo decoder by about 65%. 
A prototype for the new turbo codec (encoder/decoder) system is implemented on a Xilinx XC2V6000 FPGA chip; then the FPGA is tested using the CMC Rapid Prototyping Platform (RPP). The test proves correct functionality of the turbo codec implementation, and hence feasibility of the proposed turbo decoder design.
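The dynamic-iterative technique described in the abstract can be illustrated by a generic early-stopping loop: iterate the SISO decoders only until the extrinsic information stops changing, so good channels finish in few iterations while poor channels use the full budget. This is a simplified stand-in under assumed names (`siso_decode`, `stop_threshold`), not the thesis's actual stopping criterion.

```python
def turbo_decode(llr_channel, siso_decode, max_iters=5, stop_threshold=0.1):
    """Sketch of iterative turbo decoding with a dynamic stopping rule.

    `llr_channel` is the list of channel log-likelihood ratios, and
    `siso_decode` is a hypothetical SISO decoder callable that maps
    (channel LLRs, extrinsic input) to updated extrinsic output.
    Returns the final extrinsic values and the number of iterations used.
    """
    extrinsic = [0.0] * len(llr_channel)
    it = 0
    for it in range(1, max_iters + 1):
        new_extrinsic = siso_decode(llr_channel, extrinsic)
        # largest per-symbol change since the previous iteration
        delta = max(abs(n - o) for n, o in zip(new_extrinsic, extrinsic))
        extrinsic = new_extrinsic
        if delta < stop_threshold:  # decoder has converged; stop early
            break
    return extrinsic, it
```

Because each skipped iteration saves a full forward-backward pass through both SISO decoders, such stopping rules translate directly into the energy savings the thesis targets.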
