421.
Development of a New 3D Reconstruction Algorithm for Computed Tomography (CT). Iborra Carreres, Amadeo, 07 January 2016.
[EN] Model-based computed tomography (CT) image reconstruction is dominated by iterative algorithms. Although long reconstruction times remain a barrier in practical applications, techniques to speed up their convergence are under investigation and have obtained impressive results. In this thesis, a direct algorithm is proposed for model-based image reconstruction. The model-based approach relies on the construction of a model matrix that poses a linear system whose solution is the reconstructed image. The proposed algorithm consists of the QR decomposition of this matrix and the resolution of the system by a backward substitution process. The cost of this image reconstruction technique is one matrix-vector multiplication and one backward substitution, since the model construction and the QR decomposition are performed only once: each image reconstruction corresponds to the resolution of the same CT system for a different right-hand side.
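As a rough illustration of the pattern described above (factor once, then reconstruct each image with one matrix-vector product and one backward substitution), the following minimal sketch uses a small dense toy system. The sizes and the dense economic QR are illustrative assumptions, since the thesis works with large sparse CT model matrices where fill-in is the central concern.

```python
import numpy as np
from scipy.linalg import qr, solve_triangular

# Toy stand-in for the CT model matrix: rows = measurements (rays),
# columns = image voxels. Sizes are illustrative only.
rng = np.random.default_rng(0)
m, n = 200, 100
A = rng.random((m, n))            # model matrix, built once per CT system

# One-time cost: economic QR factorisation of the model matrix.
Q, R = qr(A, mode='economic')     # A = Q @ R, with R upper triangular (n x n)

def reconstruct(b):
    """Per-image cost: one matrix-vector product and one back substitution."""
    y = Q.T @ b                   # project the measurements onto the Q basis
    return solve_triangular(R, y) # backward substitution solves R x = y

# Each acquisition is just a new right-hand side of the same system.
x_true = rng.random(n)
b = A @ x_true                    # simulated noiseless sinogram
x = reconstruct(b)
print(np.allclose(x, x_true))     # True: exact recovery in the noiseless case
```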
Several problems arise in the implementation of this algorithm, such as the exact calculation of volume intersections, the definition of fill-in reduction strategies optimized for CT model matrices, and the exploitation of CT symmetries to reduce the size of the system. These problems have been detailed and solutions to overcome them have been proposed, and as a result, a proof-of-concept implementation has been obtained.
Reconstructed images have been analyzed and compared against the filtered backprojection (FBP) and maximum likelihood expectation maximization (MLEM) reconstruction algorithms, and the results show several benefits of the proposed algorithm. Although high resolutions have not yet been achieved, the results also demonstrate the promise of this algorithm, as substantial performance and scalability improvements would follow from the development of better fill-in strategies or additional symmetries in the CT geometry. / Iborra Carreres, A. (2015). Development of a New 3D Reconstruction Algorithm for Computed Tomography (CT) [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/59421
422.
Cooperative Payload Transportation by UAVs: A Model-Based Deep Reinforcement Learning (MBDRL) Application. Khursheed, Shahwar Atiq, 20 August 2024.
We propose a Model-Based Deep Reinforcement Learning (MBDRL) framework for collaborative payload transportation using Unmanned Aerial Vehicles (UAVs) in Search and Rescue (SAR) missions, enabling heavier payload conveyance while maintaining vehicle agility.
Our approach extends the single-drone application to a novel multi-drone one, using the Probabilistic Ensembles with Trajectory Sampling (PETS) algorithm to model the unknown stochastic system dynamics and uncertainty. We use the Multi-Agent Reinforcement Learning (MARL) framework via a centralized controller in a leader-follower configuration. The agents utilize the approximated transition function in a Model Predictive Controller (MPC) configured to maximize the reward function for waypoint navigation, while a position-based formation controller ensures stable flight of these physically linked UAVs. We also developed an Unreal Engine (UE) simulation connected to an offboard planner and controller via a Robot Operating System (ROS) framework that is transferable to real robots. This work achieves stable waypoint navigation in a stochastic environment with a sample efficiency matching that seen in single-UAV work.
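A minimal sketch of the PETS-style planning loop described here, with a toy point mass standing in for the UAV-payload dynamics: an ensemble of stochastic models approximates the transition function, and a sampling-based MPC picks actions that maximize a waypoint-tracking reward. The dynamics, reward, and all constants are invented for illustration; the actual work learns probabilistic neural-network ensembles from flight data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Each "model" here is the true point-mass dynamics plus noise; in the
# thesis these are probabilistic neural networks learned from data.
def make_model(noise):
    def step(state, action):
        pos, vel = state[:2], state[2:]
        vel = vel + 0.1 * action + noise * rng.normal(size=2)
        pos = pos + 0.1 * vel
        return np.concatenate([pos, vel])
    return step

ensemble = [make_model(noise=0.01 * k) for k in range(5)]
goal = np.array([1.0, 1.0])                     # waypoint to reach

def reward(state):
    return -np.linalg.norm(state[:2] - goal)    # closer to the waypoint is better

def plan(state, horizon=10, candidates=200):
    """Trajectory sampling: roll each candidate action sequence through a
    random ensemble member and keep the first action of the best sequence."""
    best_a, best_r = None, -np.inf
    for _ in range(candidates):
        actions = rng.uniform(-1, 1, size=(horizon, 2))
        model = ensemble[rng.integers(len(ensemble))]
        s, total = state.copy(), 0.0
        for a in actions:
            s = model(s, a)
            total += reward(s)
        if total > best_r:
            best_a, best_r = actions[0], total
    return best_a

state = np.zeros(4)                             # [x, y, vx, vy]
for _ in range(50):                             # closed-loop MPC
    state = make_model(0.0)(state, plan(state))
print(np.round(state[:2], 2))                   # typically ends near the waypoint
```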
This work has been funded by the National Science Foundation (NSF) under Award No. 2046770. / Master of Science / We apply the Model-Based Deep Reinforcement Learning (MBDRL) framework to the novel application of a UAV team transporting a suspended payload during Search and Rescue missions.
Collaborating UAVs can transport heavier payloads while staying agile, reducing the need for human involvement. We use the Probabilistic Ensembles with Trajectory Sampling (PETS) algorithm to model uncertainties and build on the previously used single UAV-payload system. By utilizing the Multi-Agent Reinforcement Learning (MARL) framework via a centralized controller, our UAVs learn to transport the payload to a desired position while maintaining stable flight through effective cooperation. We also develop a simulation in Unreal Engine (UE) connected to a controller using a Robot Operating System (ROS) architecture, which can be transferred to real robots. Our method achieves stable navigation in unpredictable environments while maintaining the sample efficiency observed in single-UAV scenarios.
423.
Model-Based Design of a Plug-In Hybrid Electric Vehicle Control Strategy. King, Jonathan Charles, 27 September 2012.
For years the trend in the automotive industry has been toward more complex electronic control systems. The number of electronic control units (ECUs) in vehicles is ever increasing, as is the complexity of communication networks among the ECUs. Increasing fuel economy standards and the rising cost of fuel are driving hybridization and electrification of the automobile. Achieving superior fuel economy with a hybrid powertrain requires an effective and optimized control system. At the same time, mathematical modeling and simulation tools have become extremely advanced and have turned simulation into a powerful design tool. The combination of increasing control system complexity and simulation technology has led to an industry-wide trend toward model-based control design. Rather than using models to analyze and validate real-world testing data, simulation is now the primary tool used in the design process long before real-world testing is possible. Modeling is used in every step from architecture selection to control system validation before on-road testing begins.
The Hybrid Electric Vehicle Team (HEVT) of Virginia Tech is participating in the 2011-2014 EcoCAR 2 competition, in which the team is tasked with re-engineering the powertrain of a GM-donated vehicle. The primary goals of the competition are to reduce well-to-wheels (WTW) petroleum energy use (PEU) and to reduce WTW greenhouse gas (GHG) and criteria emissions while maintaining performance, safety, and consumer acceptability. This paper presents a systematic methodology for using model-based design techniques for architecture selection, control system design, control strategy optimization, and controller validation to meet the goals of the competition. Simple energy management and efficiency analysis form the primary basis of architecture selection. Using a novel method, a series-parallel powertrain architecture is selected. The control system architecture and requirements are defined using a systematic approach based around the interactions between control units. Vehicle communication networks are designed to facilitate efficient data flow. Software-in-the-loop (SIL) simulation with Mathworks Simulink is used to refine a control strategy to maximize fuel economy. Finally, hardware-in-the-loop (HIL) testing on a dSPACE HIL simulator is demonstrated for performance improvements, as well as for safety-critical controller validation. The end product of this design study is a control system that has reached a high level of parameter optimization and validation, ready for on-road testing in a vehicle. / Master of Science
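As one illustration of the kind of control strategy logic that SIL simulation is used to refine, the sketch below shows a simple rule-based power split with charge-depleting and charge-sustaining modes. The structure and all thresholds are generic assumptions, not HEVT's actual strategy.

```python
# A minimal sketch of rule-based PHEV energy management. The thresholds and
# the CD/CS structure are illustrative assumptions only.
def power_split(p_demand_kw, soc, soc_cs=0.25, engine_min_kw=8.0):
    """Split driver power demand between motor and engine.

    Charge-depleting (CD) mode runs electric-only while battery state of
    charge (SOC) is high; charge-sustaining (CS) mode blends the engine in
    and avoids inefficient low-load engine operation.
    """
    if soc > soc_cs:                      # CD mode: prefer the battery
        return {'motor_kw': p_demand_kw, 'engine_kw': 0.0}
    if p_demand_kw < engine_min_kw:       # CS mode, light load: stay electric
        return {'motor_kw': p_demand_kw, 'engine_kw': 0.0}
    # CS mode, higher load: engine carries the load and recharges slightly
    return {'motor_kw': -2.0, 'engine_kw': p_demand_kw + 2.0}

# Exercising the controller over a drive-cycle snippet, as a SIL loop would:
for p, soc in [(15.0, 0.80), (5.0, 0.24), (30.0, 0.22)]:
    print(power_split(p, soc))
```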
424.
An overview of fault tree analysis and its application in model based dependability analysis. Kabir, Sohag, 18 October 2019.
Fault Tree Analysis (FTA) is a well-established and well-understood technique, widely used for dependability evaluation of a wide range of systems. Although many extensions of fault trees have been proposed, they suffer from a variety of shortcomings. In particular, even where software tool support exists, these analyses require a lot of manual effort. Over the past two decades, research has focused on simplifying dependability analysis by looking at how we can synthesise dependability information from system models automatically. This has led to the field of model-based dependability analysis (MBDA). Different tools and techniques have been developed as part of MBDA to automate the generation of dependability analysis artefacts such as fault trees. Firstly, this paper reviews the standard fault tree with its limitations. Secondly, different extensions of standard fault trees are reviewed. Thirdly, this paper reviews a number of prominent MBDA techniques where fault trees are used as a means for system dependability analysis and provides an insight into their working mechanism, applicability, strengths and challenges. Finally, the future outlook for MBDA is outlined, which includes the prospect of developing expert and intelligent systems for dependability analysis of complex open systems under the conditions of uncertainty.
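For readers unfamiliar with the basic machinery, a standard (static) fault tree reduces to probability arithmetic over AND/OR gates under an independence assumption; the sketch below evaluates a small invented tree. MBDA techniques automate the construction of such trees from system models.

```python
# Minimal static fault tree evaluation: basic events carry failure
# probabilities, and AND/OR gates combine them assuming independence.
# The event names and numbers here are invented for illustration.
def AND(*ps):
    prob = 1.0
    for p in ps:                 # all inputs must fail
        prob *= p
    return prob

def OR(*ps):
    prob = 1.0
    for p in ps:                 # at least one input fails
        prob *= (1.0 - p)
    return 1.0 - prob

pump_a, pump_b, valve, controller = 1e-3, 1e-3, 5e-4, 2e-4

# Top event: loss of coolant flow = both redundant pumps fail,
# OR the shared valve fails, OR the controller fails.
top = OR(AND(pump_a, pump_b), valve, controller)
print(f"P(top event) = {top:.2e}")   # ~7.0e-04, dominated by single points of failure
```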
425.
VERIFICATION AND VALIDATION OF AN AI-ENABLED SYSTEM. Ibukun Phillips, 11 November 2024.
Recent advancements in Machine Learning (ML) algorithms and increasing computational power have driven significant progress in Artificial Intelligence (AI) systems, especially those that leverage ML techniques. These AI-enabled systems incorporate components and data designed to simulate learning and problem-solving, distinguishing them from traditional systems. Despite their widespread application across various industries, certifying AI systems through verification and validation remains a formidable challenge. This difficulty primarily arises from the probabilistic nature of AI and ML components, which leads to unpredictable behaviors.

This dissertation investigates the verification and validation aspects within the Systems Engineering (SE) lifecycle, utilizing established frameworks and methodologies that support system realization from inception to retirement. It comprises three studies focused on applying formal methods, particularly model checking, to enhance the accuracy, value, and trustworthiness of models of engineered systems that use digital twins for modeling the system. The research analyzes digital twin data to understand physical asset behavior more thoroughly by applying both an exploratory method, system identification, and a confirmatory technique, machine learning. This dual approach not only aids in uncovering unknown system dynamics but also enhances the validation process, contributing to a more robust modeling framework.

The findings provide significant insights into the model-based design of AI-enabled digital twins, equipping systems engineers and researchers with methods for effectively designing, simulating and modeling complex systems. Ultimately, this work aims to bridge the certification gap in AI-enabled technologies, thereby increasing public trust and facilitating the broader adoption of these innovative systems.
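A minimal sketch of the exploratory/confirmatory pattern described above, on invented first-order data: system identification recovers a simple model of the physical asset from telemetry, and a residual check then serves as a basic validation test for a digital twin of it. The dynamics and tolerance are illustrative assumptions; the dissertation's systems and data are far richer.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
u = rng.uniform(-1, 1, n)                        # actuator command (input)
y = np.zeros(n)
for k in range(1, n):                            # "physical asset" telemetry
    y[k] = 0.9 * y[k - 1] + 0.5 * u[k - 1] + 0.01 * rng.normal()

# System identification: least-squares fit of y[k] = a*y[k-1] + b*u[k-1]
X = np.column_stack([y[:-1], u[:-1]])
a, b = np.linalg.lstsq(X, y[1:], rcond=None)[0]
print(f"identified a={a:.3f}, b={b:.3f}")        # close to the true 0.9, 0.5

# Validation: one-step predictions vs. telemetry; large residuals would
# flag a digital twin model that no longer matches the asset.
residuals = y[1:] - X @ np.array([a, b])
print("twin valid:", np.sqrt(np.mean(residuals ** 2)) < 0.05)
```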
426.
MBSE–gestützte Bewertung von technischen Änderungsauswirkungen im Modell der SGE – Systemgenerationsentwicklung [MBSE-supported assessment of engineering change impacts in the model of SGE, system generation engineering]. Martin, Alex; Lützelschwab, Jannis; Clermont, Vanessa Michelle; Albers, Albert, 09 October 2024.
Engineering changes are ubiquitous in the product development process and consume a large share of available research and development budgets. At the same time, roughly one third of all engineering changes are classified as critical due to high system complexity. Model-Based Systems Engineering (MBSE) approaches are regarded as promising for handling high system complexity. MBSE is a formalized approach for creating and analysing cross-domain system models, which can provide support in areas such as requirements management, verification and validation, and analysis and synthesis. Technical systems are developed in generations. With the descriptive model of system generation engineering (SGE) according to ALBERS, technical and economic risks of engineering changes can be estimated based on, among other things, the origin of reference system elements and the shares of carryover and new development. Many further approaches exist in engineering change management that focus on individual aspects of propagation and impact analysis. The model-based methodology AECIA (Advanced Engineering Change Impact Approach) offers holistic support in engineering change management. Initial publications examine, in particular, activities for checking validity and the modelling and analysis of the propagation of engineering changes. The goal of this publication is to extend the AECIA methodology with a procedure for assessing engineering changes, and to apply and evaluate it using the example of a special-purpose machine.
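A minimal sketch of the general idea of change propagation analysis over a system model, in the spirit of (but not reproducing) AECIA: elements form a dependency graph, and the estimated impact of a change decays along couplings. The graph, weights, and threshold are invented for illustration.

```python
# Change impact propagation over a (fictitious) system model graph.
def propagate(graph, start, weight=1.0, decay=0.5, threshold=0.1, seen=None):
    """Return each reachable element with its estimated impact likelihood."""
    seen = {} if seen is None else seen
    if weight < threshold or seen.get(start, 0.0) >= weight:
        return seen
    seen[start] = max(seen.get(start, 0.0), weight)
    for neighbour in graph.get(start, []):
        propagate(graph, neighbour, weight * decay, decay, threshold, seen)
    return seen

# Couplings between subsystems of an invented special-purpose machine:
system_model = {
    'gripper': ['arm', 'controller'],
    'arm': ['frame', 'drive'],
    'controller': ['software'],
    'drive': ['controller'],
}

impact = propagate(system_model, 'gripper')
for element, score in sorted(impact.items(), key=lambda kv: -kv[1]):
    print(f"{element:<11} impact likelihood {score:.2f}")
```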
427.
System identification and model-based control of a filter cake drying process. Wiese, Johannes Jacobus, 03 1900.
Thesis (MScEng (Process Engineering))--University of Stellenbosch, 2011. / ENGLISH ABSTRACT: A mineral concentrate drying process consisting of a hot gas generator, a flash dryer and a feeding section is found to be the bottleneck in the platinum concentrate smelting process. This operation is used as a case study for system identification and model-based control of dryers. Based on the availability of a month's worth of dryer data obtained from a historian, a third-party modelling and control software vendor is interested in using this data for data-driven model construction and for dryer control options. The intended contribution of this research is to use only data-driven techniques: to perform a system identification (SID) experiment and to use the resulting model in a controller found in the literature to be applicable to the dryer process. No first-principles model was available for simulation or interpretation of results. Data for the operation were obtained from the plant historian, reduced, cleaned and investigated for deterministic information through surrogate data comparison, resulting in usable time series from the plant data. The best datasets were used for modelling of the flash dryer and hot gas generator operations individually, with the hot gas generator providing usable results. The dynamic, nonlinear autoregressive models with exogenous inputs (NARX) were identified by means of a genetic programming toolbox with orthogonal least squares. The time series were reconstructed as a latent variable set, or “pseudo-embedding”, using the delay parameters identified by average mutual information, autocorrelation and false nearest neighbours. The latent variable reconstruction resulted in a large solution space, which needs to be searched for an unknown model structure; genetic programming is capable of identifying such unknown structures. Free-run prediction stability and sensitivity analysis were used to assess the best identified models for use in model-based control. The best two models for the hot gas generator were used in a basic model predictive controller in an attempt to track set point changes only.
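A minimal sketch of this modelling pattern on invented data: a lagged regressor set is reconstructed from plant time series, a simple polynomial NARX model is fitted by least squares, and its free-run prediction (feeding its own outputs back) is scored, the harder test for use inside an MPC. The fixed lags and terms are simplifications of the GP/OLS structure search used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 400
u = rng.uniform(0, 1, n)                       # e.g. fuel flow to the hot gas generator
y = np.zeros(n)
for k in range(1, n):                          # surrogate "plant" dynamics
    y[k] = 0.7 * y[k - 1] + 0.4 * u[k - 1] - 0.2 * y[k - 1] * u[k - 1]

# Regressor set [y(k-1), u(k-1), y(k-1)*u(k-1)]: a tiny "pseudo-embedding"
X = np.column_stack([y[:-1], u[:-1], y[:-1] * u[:-1]])
theta = np.linalg.lstsq(X, y[1:], rcond=None)[0]

# One-step-ahead prediction uses measured y; free-run prediction feeds the
# model's own output back in, which is the relevant test for MPC use.
y_free = np.zeros(n)
for k in range(1, n):
    y_free[k] = theta @ np.array([y_free[k - 1], u[k - 1],
                                  y_free[k - 1] * u[k - 1]])

ss_res = np.sum((y[1:] - y_free[1:]) ** 2)
ss_tot = np.sum((y[1:] - y[1:].mean()) ** 2)
print(f"free-run R^2 = {1 - ss_res / ss_tot:.3f}")   # near 1 for this noiseless toy
```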
One-step-ahead modelling of the flash dryer outlet air temperature was unsuccessful, with the best model obtaining a validation R² = 43%. The lack of process information contained in the available process variables is to blame for the poor model identification. One-step-ahead prediction of the hot gas generator resulted in a top model with validation R² = 77.1%. The best two hot gas generator models were implemented in a model predictive controller constructed in a real-time plant data flow simulation. This controller's performance was measured against set point tracking ability. The MPC implementation was unsuccessful due to the poor free-run prediction ability of the models. The controller was found to be unable to optimise the control moves using the model. This is attributed to poor free-run prediction ability in one of the models and a too-complex free-run model structure. It is expected that the number of degrees of freedom in the free-run model is too large for the optimiser to handle. A successful real-time simulation architecture for the plant dataflow could, however, be constructed in the supplied software. It is recommended that further process measurements, specifically feed moisture content, feed temperature and air humidity, be included for the flash dryer; that closed-loop system identification be investigated for the hot gas generator; and that a simpler model structure with a smaller reconstructed latent variable regressor set be used for the model predictive controller.
428.
Efficient adaptive sampling applied to multivariate, multiple output rational interpolation models, with applications in electromagnetics-based device modelling. Lehmensiek, Robert, 12 1900.
Thesis (PhD) -- Stellenbosch University, 2001. / ENGLISH ABSTRACT: A robust and efficient adaptive sampling algorithm for multivariate, multiple output rational interpolation models, based on convergents of Thiele-type branched continued fractions, is presented. A variation of the standard branched continued fraction method is proposed that uses approximation to establish a non-rectangular grid of support points. Starting with a low order interpolant, the technique systematically increases the order by optimally choosing new support points in the areas of highest error, until the desired accuracy is achieved. In this way, accurate surrogate models are established from a small number of support points, without assuming any a priori knowledge of the microwave structure under study. The technique is illustrated and evaluated on several passive microwave structures; however, it is general enough to be applied to many modelling problems.
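A minimal univariate sketch of the two ingredients of the algorithm: Thiele continued fraction interpolation via inverse differences, and adaptive sampling that places the next support point where the current interpolant errs most. The thesis generalises this to multivariate, multiple-output branched continued fractions; the target function here is an invented stand-in for a device response.

```python
import numpy as np

def thiele_coeffs(xs, ys):
    """Inverse-difference table; its diagonal holds the continued fraction coefficients."""
    n = len(xs)
    p = np.zeros((n, n))
    p[:, 0] = ys
    for j in range(1, n):
        for i in range(j, n):
            p[i, j] = (xs[i] - xs[j - 1]) / (p[i, j - 1] - p[j - 1, j - 1])
    return np.diag(p)

def thiele_eval(xs, a, x):
    """Evaluate the continued fraction a0 + (x-x0)/(a1 + (x-x1)/(...))."""
    v = a[-1]
    for j in range(len(a) - 2, -1, -1):
        v = a[j] + (x - xs[j]) / v
    return v

f = np.exp                                   # invented stand-in for a smooth response
grid = np.linspace(-1.0, 1.0, 401)           # candidate evaluation points
xs = [-1.0, 0.0, 1.0]                        # low-order starting interpolant

for _ in range(8):                           # adaptive sampling loop
    a = thiele_coeffs(np.array(xs), f(np.array(xs)))
    approx = np.array([thiele_eval(xs, a, x) for x in grid])
    err = np.abs(f(grid) - approx)
    err[np.isin(grid, xs)] = 0.0             # existing support points are exact
    if err.max() < 1e-9:                     # desired accuracy reached
        break
    xs.append(float(grid[np.argmax(err)]))   # sample where the error is largest

print(f"{len(xs)} support points, max error {err.max():.2e}")
```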
429.
Optimisation of a hollow fibre membrane bioreactor for water reuse. Verrecht, Bart, January 2010.
Over the last two decades, implementation of membrane bioreactors (MBRs) has increased due to their superior effluent quality and low plant footprint. However, they are still viewed as a high-cost option, with regard to both capital and operating expenditure (capex and opex). The present thesis extends the understanding of the impact of design and operational parameters of membrane bioreactors on energy demand, and ultimately whole-life cost. A simple heuristic aeration model based on a general algorithm for flux vs. aeration shows the benefits of adjusting the membrane aeration intensity to the hydraulic load. It is experimentally demonstrated that a lower aeration demand is required for sustainable operation when comparing intermittent 10:30 (on:off) aeration to continuous aeration, with associated energy savings of up to 75%, without a penalty in fouling rate. The applicability of activated sludge modelling (ASM) to MBRs is verified on a community-scale MBR, resulting in accurate predictions of the dynamic nutrient profile. Lastly, a methodology is proposed to optimise the energy consumption by linking the biological model with empirical correlations for energy demand, taking into account the impact of high MLSS concentrations on oxygen transfer. The determining factors for costing of MBRs differ significantly depending on the size of the plant. Operational cost reduction in small MBRs relies on process robustness with minimal manual intervention to suppress labour costs, while energy consumption, mainly for aeration, is the major contributor to opex for a large MBR. A cost sensitivity analysis shows that the other main factors influencing the cost of a large MBR, in terms of both capex and opex, are membrane costs and replacement interval, future trends in energy prices, sustainable flux, and the average plant utilisation, which depends on the amount of contingency built in to cope with changes in the feed flow.
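A minimal sketch of the kind of empirical energy-demand correlation referred to above, with an exponential alpha-factor correction for high MLSS and a comparison of continuous against 10:30 intermittent membrane aeration. All coefficients, and the reading of 10:30 as 10 s on / 30 s off, are illustrative assumptions rather than the thesis's calibrated values.

```python
import math

def alpha_factor(mlss_g_per_l, k=0.08):
    """Oxygen-transfer correction: transfer efficiency drops at high MLSS."""
    return math.exp(-k * mlss_g_per_l)

def blower_energy_kwh(air_nm3_per_h, hours, specific_kwh_per_nm3=0.025):
    """Empirical blower energy from delivered air volume."""
    return air_nm3_per_h * hours * specific_kwh_per_nm3

mlss = 10.0                                   # g/L, typical MBR mixed liquor
demand = 50.0 / alpha_factor(mlss)            # air needed to meet O2 demand, Nm3/h

continuous = blower_energy_kwh(demand, hours=24)
intermittent = blower_energy_kwh(demand, hours=24 * 10 / (10 + 30))  # 10 s on, 30 s off

print(f"continuous:   {continuous:6.1f} kWh/day")
print(f"intermittent: {intermittent:6.1f} kWh/day "
      f"({100 * (1 - intermittent / continuous):.0f}% saving)")   # 75% saving
```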
430.
Adaptive Middleware for Self-Configurable Embedded Real-Time Systems: Experiences from the DySCAS Project and Remaining Challenges. Persson, Magnus, January 2009.
Development of software for embedded real-time systems poses several challenges. Hard and soft constraints on timing, and usually considerable resource limitations, put important constraints on the development. The traditional way of coping with these issues is to produce a fully static design, i.e. one that is fully fixed already during design time. Current trends in the area of embedded systems, including the emerging openness in these types of systems, are providing new challenges for their designers, e.g. integration of new software during runtime, software upgrade or run-time adaptation of application behavior to facilitate better performance combined with more efficient resource usage. One way to reach these goals is to build self-configurable systems, i.e. systems that can resolve such issues without human intervention. Such mechanisms may be used to promote increased system openness.

This thesis covers some of the challenges involved in that development. An overview of the current situation is given, with an extensive review of different concepts that are applicable to the problem, including adaptivity mechanisms (including QoS and load balancing), middleware and relevant design approaches (component-based, model-based and architectural design). A middleware is a software layer that can be used in distributed systems, with the purpose of abstracting away distribution, and possibly other aspects, for the application developers. The DySCAS project had as a major goal the development of middleware for self-configurable systems in the automotive sector. Such development is complicated by the special requirements that apply to these platforms.

Work on the implementation of an adaptive middleware, DyLite, providing self-configurability to small-scale microcontrollers, is described and covered in detail. DyLite is a partial implementation of the concepts developed in DySCAS.

Another area given significant focus is formal modeling of QoS and resource management. Currently, applications in these types of systems are not given a fully formal definition, at least not one also covering real-time aspects. Using formal modeling would extend the possibilities for verification of not only system functionality, but also of resource usage, timing and other extra-functional requirements. This thesis includes a proposal of a formalism to be used for these purposes.

Several challenges in providing methodology and tools that are usable in production development still remain. Several key issues in this area are described, e.g. version/configuration management, access control, and integration between different tools, together with proposals for future work in the other areas covered by the thesis. / DySCAS
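A minimal sketch of the sort of run-time admission test a self-configuring middleware could apply before integrating new software: a new periodic task is accepted only if CPU utilisation stays within a schedulability bound and memory fits the budget. The bound, the task set, and the fields are invented for illustration; DyLite's actual mechanisms are richer.

```python
# Rate-monotonic-style admission test for a new periodic task.
def admit(tasks, candidate, cpu_bound=0.69, mem_budget_kb=256):
    """Accept the candidate only if the proposed task set stays schedulable
    (utilisation under the bound) and within the memory budget."""
    proposed = tasks + [candidate]
    utilisation = sum(t['wcet_ms'] / t['period_ms'] for t in proposed)
    memory = sum(t['mem_kb'] for t in proposed)
    return utilisation <= cpu_bound and memory <= mem_budget_kb

running = [
    {'name': 'sensor_poll', 'wcet_ms': 2, 'period_ms': 10, 'mem_kb': 32},
    {'name': 'ctrl_loop',   'wcet_ms': 5, 'period_ms': 20, 'mem_kb': 64},
]
new_sw = {'name': 'diag_upload', 'wcet_ms': 8, 'period_ms': 100, 'mem_kb': 48}

print(admit(running, new_sw))   # True: 0.2 + 0.25 + 0.08 = 0.53 <= 0.69
```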