  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
101

Homing-Architekturen für Multi-Layer Netze: Netzkosten-Optimierung und Leistungsbewertung / Homing Architectures in Multi-Layer Networks: Cost Optimization and Performance Analysis

Palkopoulou, Eleni 21 December 2012 (has links) (PDF)
Die schichtenübergreifende Steuerung von Multi-Layer Netzen ermöglicht die Realisierung fortgeschrittener Netzarchitekturen sowie neuartiger Konzepte zur Steigerung der Ausfallsicherheit. Gegenstand dieser Arbeit ist ein neues ressourcensparendes Konzept zur Kompensation von Core-Router-Ausfällen in IP-Netzen. Core-Router-Ausfälle führen zur Abkopplung der an ihnen angeschlossenen Zugangsrouter vom Netz. Daher werden die Zugangsrouter üblicherweise mit jeweils zwei oder mehreren verschiedenen Core-Routern verbunden (engl.: dual homing), was jedoch eine Verdoppelung der Anschlusskapazität im IP-Netz bedingt. Bei dem neuen Verfahren - Dual Homing mit gemeinsam genutzten Router-Ersatzressourcen (engl.: dual homing with shared backup router resources, DH-SBRR) - erfolgt die Zugangsrouter-Anbindung zum einen zu einem Core-Router des IP-Netzes und zum anderen zu einem Netzelement der darunterliegenden Transportschicht. Damit lassen sich Router-Ersatzressourcen, die im IP-Netz an beliebigen Stellen vorgehalten werden können, über das Transportnetz an die Stelle eines ausgefallenen Core-Routers schalten. Die Steuerung dieser Ersatzschaltung geschieht über eine schichtenübergreifende, d.h. das Transportnetz und das IP-Netz umfassende Control-Plane - beispielsweise auf Basis von GMPLS. Da beim Umschalten der Routerressourcen auch aktuelle Zustände (bspw. Routing-Tabellen) auf die Router-Ersatzressourcen mit übertragen werden müssen, beinhaltet das neue Verfahren auch Konzepte zur Router-Virtualisierung. Zum Vergleich und zur Bewertung der Leistungsfähigkeit des neuen DH-SBRR-Verfahrens werden in der Arbeit verschiedene Zugangsrouter-Homing-Varianten hinsichtlich Netz-Kosten, Netz-Verfügbarkeit, Recovery-Zeit und Netz-Energieverbrauch gegenübergestellt. Als Multi-Layer Netzszenarien werden zum einen IP über WDM und zum anderen IP über OTN (ODU) betrachtet. 
Zur Bestimmung der minimalen Netz-Kosten ist ein generisches Multi-Layer Netzoptimierungsmodell entwickelt worden, welches bei unterschiedlichen Homing-Architekturen angewendet werden kann. Neben dem Optimierungsmodell zur Netzkostenminimierung wird auch eine Modellvariante zur Minimierung des Energieverbrauchs vorgestellt. Um die Rechenzeit für die Lösung der Optimierungsprobleme zu verringern und damit auch größere Netzszenarien untersuchen zu können, bedarf es heuristischer Lösungsverfahren. Im Rahmen der Arbeit ist daher eine neue, speziell auf die Multi-Layer-Optimierungsprobleme zugeschnittene Lösungsheuristik entwickelt worden. Aus der Netzkosten-Optimierung ergibt sich, dass durch den Einsatz von DH-SBRR signifikante Kosteneinsparungen im Vergleich zu herkömmlichen Homing-Architekturen realisiert werden können. Änderungen der Verkehrslast, der Kosten der IP-Netzelemente oder der Netztopologie haben keinen signifikanten Einfluss auf dieses Ergebnis. Neben dem Kosten- und Energieeinsparungspotential sind auch die Auswirkungen auf die Netz-Verfügbarkeit und die Recovery-Zeit untersucht worden. Für die Ende-zu-Ende-Verfügbarkeit bei Anwendung der verschiedenen Homing-Architekturen können untere Grenzwerte angegeben werden. Zur Bestimmung der Recovery-Zeit bei Einsatz von DH-SBRR ist ein eigenes analytisches Berechnungsmodell entwickelt und evaluiert worden. Damit kann das DH-SBRR-Verfahren zur Einhaltung vorgegebener Recovery-Zeiten (wie sie bspw. für bestimmte Dienste gefordert werden) entsprechend parametriert werden. / The emergence of multi-layer networking capabilities opens the path for the development of advanced network architectures and resilience concepts. In this dissertation we propose a novel resource-efficient homing scheme: dual homing with shared backup router resources. The proposed scheme realizes shared router-level redundancy, enabled by the emergence of control plane architectures such as generalized multi-protocol label switching. 
Additionally, virtualization schemes complement the proposed architecture. Different homing architectures are examined and compared in terms of cost, availability, recovery time and energy efficiency. Multiple network layers are considered in Internet protocol over wavelength division multiplexing as well as Internet protocol over optical data unit settings, leading to the development of multi-layer optimization techniques. A generic multi-layer network design mathematical model, which can be applied to different homing architectures, is developed. The optimization objective can be adapted to minimize either the cost of network equipment or the power consumption of the network. To address potential issues of computational complexity, we develop a novel heuristic approach specifically targeting the proposed architecture. It is shown that significant cost savings can be achieved, even under extreme changes in the traffic demand volume, in the cost of different types of network equipment, and in the network topology characteristics. To evaluate the resulting performance tradeoffs, we study the effects on availability and recovery time. We derive lower bounds on end-to-end availability for the different homing architectures. Additionally, an analytical recovery time model is developed and evaluated. We investigate how service-imposed maximum outage requirements directly affect the parameterization of the proposed architecture.
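The availability comparison summarized in the abstract above can be illustrated with a minimal sketch. This is not the dissertation's model: it only shows the textbook lower-bound reasoning for dual homing, under the assumption that the two attachment paths fail independently, and the availability values below are illustrative placeholders.

```python
# Hedged sketch (not from the dissertation): access availability under dual
# homing versus single homing, assuming independent path failures.

def dual_homing_availability(a_primary: float, a_backup: float) -> float:
    """Availability of an access router that stays connected as long as
    at least one of its two independent attachment paths is up."""
    return 1.0 - (1.0 - a_primary) * (1.0 - a_backup)

def single_homing_availability(a_path: float) -> float:
    """With a single attachment, access availability equals path availability."""
    return a_path

if __name__ == "__main__":
    a = 0.999   # assumed availability of the primary path
    b = 0.995   # assumed availability of the backup path
    print(f"single homing: {single_homing_availability(a):.6f}")
    print(f"dual homing:   {dual_homing_availability(a, b):.6f}")
```

With independent failures, dual homing multiplies the unavailabilities, which is why the thesis can state lower bounds on end-to-end availability per homing architecture.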
103

Predicting PV self-consumption in villas with machine learning

Galli, Fabian January 2021 (has links)
In Sweden, there is a strong and growing interest in solar power. In recent years, photovoltaic (PV) system installations have increased dramatically, and a large part are distributed grid-connected PV systems, i.e. rooftop installations. Currently, the electricity export rate is significantly lower than the import rate, which has made the amount of self-consumed PV electricity a critical factor when assessing system profitability. Self-consumption (SC) is calculated using hourly or sub-hourly timesteps and is highly dependent on the solar patterns of the location of interest, the PV system configuration and the building load. As these vary for every potential installation, it is difficult to make estimates without historical data on both load and local irradiance, which are often hard to acquire or simply unavailable. A method to predict SC using information commonly available at the planning phase is therefore preferred.  There is a scarcity of documented SC data and only a few reports treat the subject of mapping or predicting SC. This thesis therefore investigates the possibility of using machine learning to create models able to predict SC from the inputs: annual load, annual PV production, tilt angle and azimuth angle of the modules, and latitude. Using the programming language Python, seven models are created with regression techniques, trained on real load data and simulated PV data from the south of Sweden, and evaluated using the coefficient of determination (R2) and the mean absolute error (MAE). The techniques are Linear Regression, Polynomial Regression, Ridge Regression, Lasso Regression, K-Nearest Neighbors (kNN), Random Forest and Multi-Layer Perceptron (MLP), in addition to the only other SC prediction model found in the literature. A parametric analysis of the models is conducted, removing one variable at a time to assess each model's dependence on every variable.  
The results are promising: five of the eight models achieve an R2 value above 0.9 and can be considered good for predicting SC. The best-performing model, Random Forest, has an R2 of 0.985 and an MAE of 0.0148. The parametric analysis also shows that while more input data is helpful, using only annual load and PV production is sufficient to make good predictions. However, this can only be stated for model performance in the southern region of Sweden, and the results are not applicable to areas outside the latitudes or country tested. / I Sverige finns ett starkt och växande intresse för solenergi. De senaste åren har antalet solcellsanläggningar ökat dramatiskt och en stor del är distribuerade nätanslutna solcellssystem, dvs takinstallationer. För närvarande är elexportpriset betydligt lägre än importpriset, vilket har gjort mängden egenanvänd solel till en kritisk faktor vid bedömningen av systemets lönsamhet. Egenanvändning (EA) beräknas med tidssteg upp till en timmes längd och är i hög grad beroende av solstrålningsmönstret för platsen av intresse, PV-systemkonfigurationen och byggnadens energibehov. Eftersom detta varierar för alla potentiella installationer är det svårt att göra uppskattningar utan att ha historiska data om både energibehov och lokal solstrålning, vilket ofta inte är tillgängligt. En metod för att förutsäga EA med allmänt tillgänglig information är därför att föredra.  Det finns en brist på dokumenterad EA-data och endast ett fåtal rapporter som behandlar kartläggning och prediktion av EA. I denna uppsats undersöks möjligheten att använda maskininlärning för att skapa modeller som kan förutsäga EA. De variabler som ingår är årlig energiförbrukning, årlig solcellsproduktion, lutningsvinkel och azimutvinkel för modulerna och latitud. Med programmeringsspråket Python skapas sju modeller med hjälp av olika regressionstekniker, där energiförbruknings- och simulerad solelproduktionsdata från södra Sverige används. 
Modellerna utvärderas med hjälp av determinationskoefficienten (R2) och mean absolute error (MAE). Teknikerna som används är linjär regression, polynomregression, Ridge regression, Lasso regression, K-nearest neighbor regression, Random Forest regression, Multi-Layer Perceptron regression. En additionell linjär regressions-modell skapas även med samma metodik som används i en tidigare publicerad rapport. En parametrisk analys av modellerna genomförs, där en variabel exkluderas åt gången för att bedöma modellens beroende av varje enskild variabel.  Resultaten är mycket lovande, där fem av de åtta undersökta modeller uppnår ett R2-värde över 0,9. Den bästa modellen, Random Forest, har ett R2 på 0,985 och ett MAE på 0,0148. Den parametriska analysen visar också att även om ingångsdata är till hjälp, är det tillräckligt att använda årlig energiförbrukning och årlig solcellsproduktion för att göra bra förutsägelser. Det måste dock påpekas att modellprestandan endast är tillförlitlig för södra Sverige, från var beräkningsdata är hämtad, och inte tillämplig för områden utanför de valda latituderna eller land.
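The self-consumption metric at the core of the abstract above is simple to state. The sketch below is a generic illustration of the standard hourly definition, not the thesis code; the synthetic load and PV profiles are assumed purely for demonstration.

```python
import numpy as np

# Hedged sketch of the self-consumption (SC) definition: the share of PV
# generation used on site, per timestep min(load, pv). Profiles are synthetic.

def self_consumption(load_kwh: np.ndarray, pv_kwh: np.ndarray) -> float:
    """SC = self-consumed PV energy / total PV energy."""
    consumed = np.minimum(load_kwh, pv_kwh).sum()
    return float(consumed / pv_kwh.sum())

hours = np.arange(24)
load = 0.5 + 0.3 * (hours > 16)                                # evening bump (kWh)
pv = np.clip(np.sin((hours - 6) / 12 * np.pi), 0, None) * 2.0  # daytime PV (kWh)
print(f"daily SC: {self_consumption(load, pv):.3f}")
```

Because SC depends on the hour-by-hour overlap of load and generation, annual totals alone cannot compute it exactly, which is what motivates the regression models in the thesis.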
104

PV self-consumption: Regression models and data visualization

Tóth, Martos January 2022 (has links)
In Sweden, the installed capacity of residential PV systems is increasing every year. The lack of a feed-in tariff scheme means the techno-economic optimization of PV systems is based mainly on self-consumption. Calculating this parameter requires hourly building loads and hourly PV generation, data that cannot easily be obtained from households. A predictive model based on already available data would therefore be preferred. Existing machine learning models can be suitable and have been tested, but the literature on this topic is fairly sparse. The machine learning models use a dataset that includes real measured building load data, simulated PV generation data, and the self-consumption calculated from these two inputs. The simulation of PV generation can be based on a Typical Meteorological Year (TMY) weather file or on measured weather data. The TMY file can be generated more quickly and easily, but it is only spatially matched to the building load, while the measured data is matched both temporally and spatially. This thesis investigates whether using a TMY file has any major impact on the performance of the regression models by comparing it to a model using the measured weather file. In this model the buildings are single-family houses from the south of Sweden.  Different building types can have different load profiles, which can affect model performance; because of these different load profiles, the effect of using a TMY file may be more significant. This thesis also compares the impact of TMY file usage in the case of multi-family houses, and compares the two building types by the performance of the machine learning models. PV and battery prices are decreasing from year to year, and the subsidies in Sweden offer a significant tax credit on battery investments combined with PV systems. This can make batteries profitable. 
Lastly, this thesis evaluates the performance of the machine learning models after adding a battery to the system, for both TMY and measured data. The optimal system is also predicted based on self-consumption, PV generation and battery size.  The models have high accuracy; the random forest model achieves an R2 above 0.9 for all cases. The results confirm that using the TMY file leads only to marginal errors, and it can be used for training the models. The battery model shows promising results, with an R2 above 0.9 for four models: random forest, k-NN, MLP and polynomial. The prediction of the optimal system also shows promising results for the polynomial model, with an 18% error in predicted payback time compared to the reference. / I Sverige ökar den installerade kapaciteten för solcellsanläggningarna för bostäder varje år. Bristen på inmatningssystem gör att den tekniska ekonomiska optimeringen av solcellssystemen huvudsakligen bygger på egen konsumtion. Beräkningen av denna parameter omfattar byggnadsbelastningar per timme och PV-generering per timme. Dessa uppgifter kan inte lätt erhållas från hushållen. En prediktiv modell baserad på redan tillgängliga data skulle vara att föredra och behövas i detta fall. De redan tillgängliga maskininlärningsmodellerna kan vara lämpliga och redan testade men mängden litteratur i detta ämne är ganska låg. Maskininlärningsmodellerna använder en datauppsättning som inkluderar verkliga mätdata från byggnader och simulerad PV-genereringsdata och den beräknade egenförbrukningsdata baserad på dessa två indata. Simuleringen av PV-generering kan baseras på väderfilen Typical Meteorological Year (TMY) eller på uppmätta väderdata. TMY-filen kan genereras snabbare och enklare, men den anpassas endast rumsligt till byggnadsbelastningen, medan uppmätta data är temporalt och rumsligt matchade. 
Denna avhandling undersöker om användningen av TMY-fil leder till någon större påverkan på prestandan genom att jämföra den med den uppmätta väderfilsmodellen. I denna modell är byggnaderna småhus från södra Sverige. De olika byggnadstyperna kan ha olika belastningsprofiler vilket kan påverka modellens prestanda. På grund av dessa olika belastningsprofiler kan effekten av att använda TMY-fil ha mer betydande inverkan. Den här avhandlingen jämför också effekten av TMY-filanvändningen i fallet med flerfamiljshus och jämför också de två byggnadstyperna efter prestanda för maskininlärningsmodellerna. PV- och batteripriserna minskar från år till år. Subventionerna i Sverige ger en betydande skattelättnad på batteriinvesteringar med solcellssystem. Detta kan göra batterierna lönsamma. Slutligen utvärderar denna avhandling prestandan för maskininlärningsmodellerna efter att ha lagt till batteriet i systemet för både TMY och uppmätta data. Det optimala systemet förutsägs också baserat på egen förbrukning, årlig byggnadsbelastning, årlig PV-generering och batteristorlek. Modellerna har hög noggrannhet, den slumpmässiga skogsmodellen är över 0,9 R2 för alla fall. Resultaten bekräftar att användningen av TMY-filen endast leder till marginella fel, och den kan användas för träning av modellerna. Batterimodellen har lovande resultat med över 0,9 R2 för fyra modeller: random skog, k-NN, MLP och polynom. Förutsägelsen av den optimala systemmodellen har också lovande resultat för polynommodellen med 18 % fel i förutspådd återbetalningstid jämfört med referensen.
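To make the battery's effect on self-consumption concrete, here is a deliberately simplified sketch: a lossless battery under greedy dispatch. Both the dispatch rule and the profiles are assumptions of this illustration, not the thesis model.

```python
import numpy as np

# Hedged sketch: surplus PV charges an ideal battery, later deficits discharge
# it, so self-consumption (SC) rises with storage. Losses are ignored.

def sc_with_battery(load, pv, capacity_kwh):
    """SC with a lossless battery of given capacity, greedy dispatch."""
    soc = 0.0           # state of charge (kWh)
    self_consumed = 0.0
    for l, p in zip(load, pv):
        direct = min(l, p)
        surplus, deficit = p - direct, l - direct
        charge = min(surplus, capacity_kwh - soc)   # store surplus PV
        soc += charge
        discharge = min(deficit, soc)               # cover deficit from storage
        soc -= discharge
        self_consumed += direct + discharge
    return self_consumed / sum(pv)

hours = np.arange(24)
load = np.full(24, 0.6)                                        # flat load (kWh/h)
pv = np.clip(np.sin((hours - 6) / 12 * np.pi), 0, None) * 2.0  # daytime PV (kWh)
print(sc_with_battery(load, pv, 0.0), sc_with_battery(load, pv, 3.0))
```

With zero capacity the function reduces to the plain hourly SC definition, which makes the battery's marginal contribution easy to isolate.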
105

The corpus of Greek medical papyri and digital papyrology

Reggiani, Nicola 20 April 2016 (has links) (PDF)
The ongoing project of digitising a corpus of ancient Greek texts on papyrus dealing with medical topics raises some problematic questions involving general issues of digital papyrology. The main electronic resource of papyrological texts, the Papyrological Navigator (papyri.info), has indeed been designed to host documentary items, while the special technical, even literary nature of medical papyri (which include, besides documents related to medicine, also handbooks, school books, and treatises by both known and unknown authors) requires new ways to treat the relevant data (paratextual devices such as diacriticals, punctuation, abbreviations, layout features). Such issues are currently under discussion by the team in charge of the forthcoming Digital Corpus of Literary Papyri (DCLP), but further options need to be taken into consideration in order to develop a fully functional, interactive, dynamic database of ancient technical texts: in particular, this paper will present and discuss the potentialities of multi-layer linguistic annotation (useful to fulfil the needs of a multifaceted technical language) and of a multitextual digital edition (helpful in view of the fragmentary condition of the texts and of their often problematic relationship with the known manuscript tradition).
106

Low cost integration of Electric Power-Assisted Steering (EPAS) with Enhanced Stability Program (ESP)

Soltani, Amirmasoud January 2014 (has links)
Vehicle Dynamics Control (VDC) systems (also known as active chassis systems) are mechatronic systems developed to improve vehicle comfort, handling and/or stability. Traditionally, most of these systems have been individually developed and manufactured by various suppliers and utilised by automotive manufacturers. These decentralised control systems usually improve one aspect of vehicle performance and in some cases even worsen other features of the vehicle. Although the benefit of stand-alone VDC systems has been proven, as the number of active systems in vehicles increases, the importance of controlling them in a coordinated and integrated manner, in order to reduce system complexity, eliminate possible conflicts and expand the system operational envelope, has become predominant. The subject of Integrated Vehicle Dynamics Control (IVDC) for improving overall vehicle performance in the presence of several active VDC systems has recently become the topic of many research and development activities in both academia and industry. Several approaches have been proposed for the integration of vehicle control systems, ranging from the simple and obvious solution of networking the sensor, actuator and processor signals through protocols like CAN or FlexRay, to complicated multi-layered, multi-variable control architectures. In fact, the development of an integrated control system is a challenging multidisciplinary task that should reduce complexity, increase flexibility and improve the overall performance of the vehicle. The aim of this thesis is to develop a low-cost control scheme for the integration of the Electric Power-Assisted Steering (EPAS) system with the Enhanced Stability Program (ESP) system to improve driver comfort as well as vehicle safety. 
In this dissertation, a systematic approach toward a modular, flexible and reconfigurable control architecture for integrated vehicle dynamics control systems is proposed, which can be implemented in a real-time environment with low computational cost. The proposed control architecture, named “Integrated Vehicle Control System (IVCS)”, is customised for the integration of the EPAS and ESP control systems. The IVCS architecture consists of three cascaded control loops: high-level vehicle control, low-level (steering torque and brake slip) control, and smart actuator (EPAS and EHB) control. The controllers are designed based on the Youla parameterisation (closed-loop shaping) method. A fast, adaptive and reconfigurable control allocation scheme is proposed to coordinate the control of the EPAS and ESP systems. An integrated EPAS & ESP HiL/RCP system, including the real EPAS and Electro-Hydraulic Brake (EHB) smart actuators integrated with a virtual vehicle model (using CarMaker/HiL®) with driver-in-the-loop capability, is designed and utilised as a rapid control development platform to verify and validate the developed control systems in a real-time environment. Integrated vehicle dynamics control is one of the most promising and challenging research and development topics. A general architecture and control logic of the IVDC system, based on a modular and reconfigurable control allocation scheme for redundant systems, is presented in this research. The proposed fault-tolerant configuration is applicable not only to the integrated control of the EPAS and ESP systems but also to the integration of other types of vehicle active systems, which could be the subject of future work.
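The control allocation idea mentioned above can be sketched generically. The snippet below shows weighted least-squares control allocation, a standard textbook formulation; it is not the thesis's IVCS controller, and the effectiveness matrix and weights are invented for illustration.

```python
import numpy as np

# Hedged sketch: split a high-level command (e.g. a corrective yaw moment)
# between redundant actuators such as steering and braking, by minimising a
# weighted effort u^T W u subject to B u = v.

def allocate(B: np.ndarray, v: np.ndarray, W: np.ndarray) -> np.ndarray:
    """Weighted least-squares allocation: u = W^-1 B^T (B W^-1 B^T)^-1 v."""
    Winv = np.linalg.inv(W)
    return Winv @ B.T @ np.linalg.inv(B @ Winv @ B.T) @ v

B = np.array([[1.0, 0.8]])   # assumed effectiveness of steering, braking
W = np.diag([1.0, 4.0])      # penalise brake intervention more than steering
u = allocate(B, np.array([1.0]), W)
print(np.round(u, 3))        # steering carries most of the command
```

Changing the weights in W reassigns effort between actuators without touching the high-level controller, which is the reconfigurability that such allocation schemes provide.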
107

Transient engine model for calibration using two-stage regression approach

Khan, Muhammad Alam Z. January 2011 (has links)
Engine mapping is the process of empirically modelling engine behaviour as a function of adjustable engine parameters, predicting the output of the engine. The aim is to calibrate the electronic engine controller to meet decreasing emission requirements and increasing fuel economy demands. Modern engines have an increasing number of control parameters that have a dramatic impact on the time and effort required to obtain optimal engine calibrations, further complicated by transient engine operating modes. A new model-based transient calibration method has been built on the application of hierarchical statistical modelling methods and the analysis of repeated experiments for engine mapping. The methodology is based on a two-stage regression approach, which organises the engine data for the mapping process in sweeps. The introduction of time-dependent covariates in the hierarchy of the modelling led to the development of a new approach to the problem of transient engine calibration. This new approach to transient engine modelling is analysed using a small designed data set for a throttle-body inferred airflow phenomenon. The data collection for the model was performed on a transient engine test bed as part of this work, with sophisticated software and hardware installed on it. Models and their associated experimental design protocols have been identified that are capable of accurately predicting the desired response features over the whole region of operability. Further, during the course of the work, the utility of a multi-layer perceptron (MLP) neural network based model for the multi-covariate case has been demonstrated. The MLP neural network performs slightly better than the radial basis function (RBF) model; this comparison is based on relevant model selection criteria as well as internal and external validation fits. 
Finally, the general ability of the model was demonstrated through the implementation of this methodology for use in the calibration process, for populating the electronic engine control module lookup tables.
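The two-stage regression idea can be illustrated on synthetic data: stage one fits a simple curve to each sweep, stage two models how the fitted coefficients vary with an operating condition. This is a hedged sketch of the general approach, not the thesis's hierarchical model; all numbers below are made up.

```python
import numpy as np

# Hedged sketch of two-stage regression on synthetic "sweeps".
# Stage 1: per-sweep linear fits. Stage 2: regress coefficients on speed.

rng = np.random.default_rng(0)
speeds = np.array([1000.0, 2000.0, 3000.0, 4000.0])  # sweep condition (rpm)
stage1 = []
for n in speeds:
    x = np.linspace(0, 1, 20)                        # swept input (e.g. throttle)
    y = (0.002 * n) * x + 0.001 * n + rng.normal(0, 0.01, x.size)
    slope, intercept = np.polyfit(x, y, 1)           # stage 1 fit for this sweep
    stage1.append((slope, intercept))
stage1 = np.array(stage1)

# Stage 2: how do the stage-1 coefficients vary with engine speed?
slope_model = np.polyfit(speeds, stage1[:, 0], 1)
icept_model = np.polyfit(speeds, stage1[:, 1], 1)
print("slope vs speed:", np.round(slope_model, 5))
```

The stage-2 fits recover the generating coefficients (0.002 and 0.001 per rpm) closely, showing how sweep-level structure reduces a transient mapping problem to a small set of coefficient models.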
108

Étude des aspects cinétiques et thermodynamiques gouvernant la perméabilité de modèles d’essence à l’interface de deux matériaux polymères barrières : application à l’optimisation de réservoirs pour carburants / Study of the kinetic and thermodynamic aspects controlling the permeability of gasoline models at the interface of two polymeric barrier materials : application to the optimization of fuel tanks

Zhao, Jing 14 December 2010 (has links)
Répondant à une forte demande de sécurité, d’économie de poids et d’optimisation du volume utile, les réservoirs pour carburants sont aujourd’hui généralement constitués d’une paroi barrière polymère multicouche visant à limiter les émissions de vapeurs dans l’atmosphère. Être capable de prédire les perméabilités est primordial pour l’optimisation de telles structures. Grâce à des automates conçus au laboratoire, les mesures de sorption et de perméabilité ont été réalisées pour trois polymères leaders du domaine (PEHD, Liant et EVOH) et des mélanges modèles de carburants composés d’éthanol, d’iso-octane et de toluène. Les propriétés de sorption ont été modélisées par UNIQUAC et un nouveau modèle inédit, SORPFIT. Les paramètres des lois de diffusion, de type TSVF2 ou Long généralisé, ont aussi été optimisés pour chaque polymère, malgré une difficulté particulière pour l’EVOH. Une méthodologie originale a ensuite été proposée pour la prédiction des flux partiels des multicouches à partir des paramètres caractéristiques des monocouches correspondantes. Selon la nature et la disposition de chaque couche, deux cas de figure ont été identifiés : la limitation cinétique et la limitation thermodynamique du transfert, cette dernière étant estimée à partir des modèles de sorption initialement optimisés. La confrontation des calculs avec les mesures expérimentales réalisées pour des films bicouches et tricouches d’Arkema montre des prédictions très satisfaisantes. Cette approche est finalement étendue à la simulation de la perméabilité de structures multicouches plus complexes et plus représentatives des réservoirs pour carburants industriels. / Responding to a strong demand for security, weight reduction and volume optimization, fuel tanks are nowadays usually made of multi-layer polymer barriers in order to limit vapour emissions into the atmosphere. Predicting their permeability remains a critical worldwide challenge for multi-layer optimization. 
Thanks to original semi-automated experimental set-ups, sorption and permeability measurements were carried out for three leading polymer materials (HDPE, EVOH and Binder) and model fuel mixtures of ethanol, iso-octane and toluene. The modelling of the sorption properties was successfully achieved by the UNIQUAC model and a new model called SORPFIT. The parameters of the diffusion laws according to the TSVF2 or the generalized Long models were also optimized for each polymer despite some difficulty with EVOH. An original methodology was then proposed for predicting the partial fluxes of polymer multi-layers from the characteristic parameters of the corresponding mono-layers. Depending on the nature and disposition of each layer, two scenarios were identified: the kinetics limitation and the thermodynamics limitation of mass transfer, the latter being estimated from the sorption models initially optimized. The comparison of the calculated fluxes with the experimental data obtained for bi-layer and tri-layer films provided by the world-wide industrial company Arkema showed that the predictions were very satisfying. This approach was then extended to the simulation of the permeability of more complex multi-layer structures which are more representative of commercial fuel tanks
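The mono-layer-to-multi-layer prediction described in this abstract can be illustrated with the classical steady-state series-resistance picture of permeation. Note this sketch assumes a constant permeability per layer, which is a strong simplification: the thesis itself uses concentration-dependent sorption (UNIQUAC, SORPFIT) and diffusion (TSVF2, generalized Long) models that this toy calculation does not reproduce, and all numeric values below are purely illustrative.

```python
# Minimal sketch: steady-state flux of one penetrant through a multi-layer
# wall, assuming a constant permeability P_i per layer. At steady state the
# layers behave like resistances in series, R_i = l_i / P_i.

def multilayer_flux(layers, dp):
    """layers: list of (thickness, permeability) pairs; dp: partial-pressure drop.
    Returns the steady-state flux dp / sum(l_i / P_i)."""
    resistance = sum(l / p for l, p in layers)
    return dp / resistance

# Hypothetical HDPE / EVOH / HDPE structure with illustrative values: the thin
# EVOH barrier dominates the total resistance because its permeability is
# orders of magnitude lower than that of HDPE.
hdpe = 1e-12   # illustrative permeability (arbitrary consistent units)
evoh = 1e-15
structure = [(1.5e-3, hdpe), (0.1e-3, evoh), (1.5e-3, hdpe)]
print(multilayer_flux(structure, dp=1.0))
```

The same series formula is what makes a thin high-barrier inner layer so effective in practice; the thesis's contribution is precisely the harder case where sorption couples the layers thermodynamically and this simple additivity breaks down.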
109

Métamatériaux pour l’infrarouge et applications / Metamaterials for the infrared and applications

Ghasemi, Rasta 12 November 2012 (has links)
Metamaterials are artificial composites with electromagnetic properties not found in nature. Despite spectacular development over the past decade, their potential at optical wavelengths is not yet clearly established, owing to technological difficulties and physical constraints such as the losses in the metals from which metamaterials are made. In this thesis we show that metamaterials have very favourable properties in the context of integrated optics in the near infrared. We developed a strategy for incorporating metamaterials into photonic circuits with minimal absorption losses: rather than letting the whole guided mode interact with the metamaterial, only an evanescent component outside the waveguide does. To realize such an adaptor and other functionalities, it is important to determine which metamaterial geometry is best suited to infrared applications. We propose structures based on gold cut wires stacked layer upon layer. Using numerical simulations and free-space experiments, we show that a whole range of optical responses can be obtained by controlling the coupling between the wire levels, i.e. by adjusting the distance between the wires as well as their alignment. In particular, we were able to control the electric and magnetic responses of our structures separately, a design flexibility not found in the metamaterials proposed so far.
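The coupling-controlled response described above can be caricatured with a lumped two-resonator hybridization model: two identical cut-wire resonances at `w0` split into symmetric and antisymmetric modes as the near-field coupling `kappa` (set by the vertical spacing and lateral alignment of the wires) grows. This is only an illustrative toy model, not the full-wave electromagnetic simulations the thesis relies on; `w0` and `kappa` below are assumed values.

```python
import numpy as np

# Toy hybridization model: two identical coupled resonators at angular
# frequency w0 split into a low (symmetric) and a high (antisymmetric)
# eigenmode; the splitting grows with the coupling strength kappa.

def hybridized_modes(w0, kappa):
    """Eigenfrequencies of two identical coupled resonators (|kappa| < 1)."""
    return np.sqrt(w0**2 * (1 - kappa)), np.sqrt(w0**2 * (1 + kappa))

# Illustrative near-infrared resonance (~1.5 um) and a 10% coupling.
w_low, w_high = hybridized_modes(w0=200e12, kappa=0.1)
print(w_low, w_high)  # stronger coupling (smaller spacing) widens the split
```

Tuning the spacing and alignment of stacked wires in effect tunes `kappa` independently for the electric-dipole-like and magnetic (circulating-current) modes, which is the intuition behind the separate electric/magnetic control reported in the abstract.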
110

Redes neurais e algoritmos genéticos no estudo quimiossistemático da família Asteraceae / Neural Network and Genetic Algorithms in the Chemosystematic study of Asteraceae Family

Correia, Mauro Vicentini 16 March 2010 (has links)
In this work, two artificial-intelligence methods (neural networks and genetic algorithms) were used to carry out a chemosystematic study of the family Asteraceae. Asteraceae is one of the largest families among the Angiosperms, with approximately 24,000 species. Its species produce a great diversity of secondary metabolites, among which the terpenoids, polyacetylenes, flavonoids and coumarins deserve mention. For a better understanding of the chemical diversity of the family, a database was built with the occurrences of twelve classes of metabolites (monoterpenes, sesquiterpenes, lactonized sesquiterpenes, diterpenes, triterpenes, coumarins, flavonoids, polyacetylenes, benzofurans, benzopyrans, acetophenones and phenylpropanoids) produced by species of the family. Three different studies were conducted from this database. In the first, using Kohonen self-organizing maps and the chemical data classified according to two of the most recent phylogenies of the family, tribes and genera of Asteraceae were successfully separated; it was also possible to show that the chemical information agrees better with the phylogeny of Funk (Funk et al. 2009) than with that of Bremer (Bremer 1994, 1996). The second study aimed at building models to predict the number of occurrences of the twelve metabolite classes using a multi-layer perceptron trained with the error-backpropagation algorithm; the result was unsatisfactory. Although for some metabolite classes the training phase gave satisfactory results, the test phase showed that the models cannot make predictions for data they were not exposed to during training, and they are therefore not suitable for prediction. Finally, the third study built linear regression models using genetic algorithms as the variable-selection method. It indicated that monoterpenes and sesquiterpenes are closely related biosynthetically, and that biosynthetic relations also exist between monoterpenes and diterpenes and between sesquiterpenes and triterpenes.
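The third study's genetic-algorithm variable selection can be sketched as follows: binary masks over the candidate variables evolve under a fitness that rewards a good least-squares fit and penalizes model size. This is a minimal illustrative sketch on synthetic data, not the thesis's actual algorithm, parameters or metabolite dataset; the population size, mutation rate and parsimony penalty below are assumed values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the metabolite-occurrence data: y depends only on
# variables 0 and 2, so the GA should tend to select exactly those columns.
X = rng.normal(size=(120, 6))
y = 3.0 * X[:, 0] - 2.0 * X[:, 2] + 0.1 * rng.normal(size=120)

def fitness(mask):
    """Negative residual sum of squares of an OLS fit on the selected
    columns, minus a small penalty per variable to favour sparse models."""
    if not mask.any():
        return -np.inf
    coef, *_ = np.linalg.lstsq(X[:, mask], y, rcond=None)
    rss = float(np.sum((y - X[:, mask] @ coef) ** 2))
    return -rss - 0.5 * mask.sum()

def tournament(scores):
    """Pick the better of two randomly chosen individuals."""
    i, j = rng.choice(len(scores), size=2, replace=False)
    return i if scores[i] > scores[j] else j

pop = rng.integers(0, 2, size=(30, X.shape[1])).astype(bool)
for _ in range(40):
    scores = [fitness(m) for m in pop]
    parents = pop[[tournament(scores) for _ in range(len(pop))]]
    children = parents.copy()
    for i in range(0, len(children) - 1, 2):        # one-point crossover
        cut = rng.integers(1, X.shape[1])
        children[i, cut:] = parents[i + 1, cut:]
        children[i + 1, cut:] = parents[i, cut:]
    children ^= rng.random(children.shape) < 0.05   # bit-flip mutation
    pop = children

best = max(pop, key=fitness)
print(sorted(np.flatnonzero(best)))
```

In the chemosystematic setting, the selected regressors for a given metabolite class point to the other classes most predictive of it, which is how the biosynthetic relations mentioned in the abstract (e.g. monoterpenes with sesquiterpenes) can be read off the fitted models.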
