151

Identification of Improvement areas in Servitization within European Space Exploration : A multi-stakeholder case study of challenges in servitization / Identifiering av förbättringsområden inom tjänstefiering i Europeisk rymdutforskning: En case studie av utmaningar inom tjänstefiering

Malmberg, Jonathan January 2023 (has links)
The space industry is currently undergoing a significant servitization shift as space agencies globally are transitioning from the government-led, product-oriented procurement approach that has been the standard for decades to a more commercial, service-oriented procurement approach. The purpose of the thesis is to identify what challenges exist within servitization in European space exploration and translate these into related improvement areas for the service-oriented procurement approach adopted by the European Space Agency (ESA). The thesis adopts a qualitative case-study approach in which four different commercial services, developed through a commercial partnership between ESA and private enterprises, are studied. In total, the study identifies 21 challenges across three different life-cycle stages for the commercial services. First, the study identifies cultural challenges for both the space agency and industry as they struggle to transition to a service culture from the existing culture that is strongly linked to the traditional approach. Second, the study also identifies several challenges related to how the processes established within the framework of the commercial partnership are currently inadequate to support the transition to commercial services. In particular, the study highlights knowledge gaps related to business planning and marketing, insufficient processes to ensure a balance between cost and quality incentives, and high barriers to entry for SMEs. Finally, the study identifies relational challenges with regard to the collaboration between the space agency and the commercial partner. The results indicate that the collaboration between ESA and the commercial partners currently lacks the transparency and efficiency needed to succeed with servitization. To resolve these challenges, the study proposes 21 different improvement areas for ESA in relation to its commercialisation initiative. In particular, the thesis highlights process improvements related to the choice of procurement approach, development of business plans, evaluation of upfront commitment to utilization, and visibility into the service design. The thesis concludes by highlighting the need for continued work on developing these improvements. The thesis results serve as a starting point for developing a future approach to planning and managing the development of commercial services within space exploration. / Rymdindustrin genomgår för närvarande en betydande tjänstefiering där rymdorganisationer globalt övergår från en statligt styrd produkt-orienterad upphandlingsmetod till en mer kommersiell tjänste-orienterad upphandlingsmetod. Syftet med examensarbetet är att identifiera vilka utmaningar som finns inom tjänstefiering i europeisk rymdutforskning samt vilka relaterade förbättringsområden som följaktligen finns inom den tjänste-orienterade upphandlingsmetod som European Space Agency (ESA) har antagit. Examensarbetet baseras på en fallstudie där fyra olika kommersiella tjänster, utvecklade genom ett kommersiellt partnerskap mellan ESA och privata företag, studeras. Studien identifierar totalt sett 21 utmaningar över tre olika livscykelfaser för de kommersiella tjänsterna. För det första identifierar studien kulturella utmaningar då både rymdorganisationen och industrin upplever svårigheter i att övergå från den befintliga kulturen, som starkt är kopplad till den traditionella metoden, till en tjänste-orienterad kultur.
För det andra identifierar studien även flera utmaningar relaterade till hur processerna som etablerats inom ramen för det kommersiella partnerskapet för närvarande är otillräckliga för att stödja övergången till kommersiella tjänster. Studien lyfter särskilt fram kunskapsluckor inom affärsplanering och marknadsföring, otillräckliga processer för att säkerställa balans mellan kostnads- och kvalitetsincitament samt höga inträdeshinder för små och medelstora företag. Slutligen identifierar studien relationsmässiga utmaningar med avseende på samarbetet mellan rymdorganisationen och den kommersiella partnern. Resultaten indikerar att samarbetet mellan ESA och industrin idag saknar den nödvändiga transparensen och effektiviteten i samarbetet som krävs för att lyckas med tjänstefiering. För att lösa dessa utmaningar föreslår studien 21 olika förbättringsområden för ESA i relation till dess kommersialiseringsinitiativ. Särskilt framhävs processförbättringar relaterade till val av upphandlingsmetod, utveckling av affärsplaner, utvärdering av tidiga åtaganden för utnyttjande och insyn i tjänstedesignen. Examensarbetet avslutas med att betona behovet av fortsatt arbete med utveckling av förbättringar. Resultaten utgör en startpunkt för att utveckla en framtida strategi för planering och hantering av utvecklingen av kommersiella tjänster inom rymdutforskning.
152

Laboratory starlight simulator for future space-based heterodyne interferometry

Karlsson, William January 2023 (has links)
In astronomy, interferometry by ground-based telescopes offers the greatest angular resolution. However, the Earth's atmosphere distorts the incident wavefront from a celestial object, leading to blurring and signal loss. It also restricts the transmission of specific wavelengths within the electromagnetic spectrum. Space-based interferometers would mitigate atmospheric obstruction and potentially enable even higher angular resolutions. The main challenge of implementing space-based interferometry is the necessity of matching the light's optical path differences at the telescopes to within the coherence length of the light using physical delay lines. This thesis explores the potential realization of digital delay lines via heterodyne interferometry. The technique generates a heterodyne beat note, at the frequency difference between the incident stellar light and a reference laser, that lies in the radio regime, permitting digitization of the delay line while preserving the phase information for image reconstruction. The primary objective of the thesis is to advance the field of astronomy by constructing a testbed environment for investigating future space-based heterodyne interferometry in the NIR range. This requires the achievement of two main tasks. Firstly, a laboratory starlight simulator is developed to simulate the appearance of a distant star's wavefront as it reaches telescopes on or around Earth. The resulting starlight simulator contains an optical assembly that produces a point source in NIR light, aligned with a mirror collimator's focal point, transforming the wavefront from spherical to planar. Secondly, a fiber-optical circuit with interference capability is constructed, consisting of a free-space optical delay line and a polarization-controlled custom-sized fiber. The delay line matches the optical paths to within the light's coherence length, while the polarization controller optimizes interference visibility. The completion of these tasks establishes the foundation for investigating space-based heterodyne interferometry in the NIR with the potential implementation of delay line digitization.
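The scale of the problem can be sketched with rough numbers. The example below is a back-of-the-envelope illustration with assumed values (wavelength, filtered bandwidth, and laser offset are not taken from the thesis): the coherence length sets how precisely the delay line must match the optical paths, and mixing with a nearby reference laser moves the measurement down to a radio-frequency beat note.

```python
# Illustrative sketch with assumed values (not from the thesis): coherence
# length that the delay line must match, and the heterodyne beat note obtained
# by mixing NIR starlight with a reference laser.
C = 299_792_458.0  # speed of light [m/s]

def coherence_length(wavelength_m: float, bandwidth_m: float) -> float:
    """Coherence length L_c ~ lambda^2 / delta_lambda."""
    return wavelength_m ** 2 / bandwidth_m

def beat_frequency(signal_wavelength_m: float, lo_wavelength_m: float) -> float:
    """Beat note |f_signal - f_LO| produced by the heterodyne mixing."""
    return abs(C / signal_wavelength_m - C / lo_wavelength_m)

# Assumed example: 1550 nm starlight filtered to a 1 nm bandwidth, and a
# reference laser offset by 0.008 nm from the signal.
lc = coherence_length(1550e-9, 1e-9)            # ~2.4 mm path-matching tolerance
fb = beat_frequency(1550.000e-9, 1550.008e-9)   # ~1 GHz beat note (radio regime)
print(f"coherence length ~ {lc * 1e3:.2f} mm, beat note ~ {fb / 1e9:.2f} GHz")
```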
153

Basil-GAN / Basilika-GAN

Risberg, Jonatan January 2022 (has links)
Developments in computer vision have sought to design deep neural networks which, trained on a large set of images, are able to generate high-quality artificial images that share semantic qualities with the original image set. A pivotal shift was made with the introduction of the generative adversarial network (GAN) by Goodfellow et al. Building on the work by Goodfellow, more advanced models using the same idea have shown great improvements in terms of both image quality and data diversity. GAN models generate images by feeding samples from a vector space into a generative neural network. The structure of these so-called latent vector samples has been shown to correspond to semantic similarities between their corresponding generated images. In this thesis the DCGAN model is trained on a novel data set consisting of image sequences of the growth process of basil plants from germination to harvest. We evaluate the trained model by comparing the DCGAN performance against benchmark data sets such as MNIST and CIFAR10 and conclude that the model trained on the basil plant data set achieved results similar to those on the MNIST data set and better results than on the CIFAR10 data set. To argue for the potential of using more advanced GAN models, we compare the results from the DCGAN model with the contemporary StyleGAN2 model. We also investigate the latent vector space produced by the DCGAN model and confirm, in accordance with previous research, that the DCGAN model is able to generate a latent space with data-specific semantic structures. For the DCGAN model trained on the data set of basil plants, the latent space is able to distinguish images of early-stage basil plants from late-stage plants in the growth phase. Furthermore, utilizing the sequential semantics of the basil plant data set, an attempt at generating an artificial growth sequence is made using linear interpolation. Finally, we present an unsuccessful attempt at visualising the latent space produced by the DCGAN model using a rudimentary approach to inverting the generator network function. / Utvecklingen inom datorseende har syftat till att utforma djupa neurala nätverk som tränas på en stor mängd bilder och kan generera konstgjorda bilder av hög kvalitet med samma semantiska egenskaper som de ursprungliga bilderna. Ett avgörande skifte skedde när Goodfellow et al. introducerade det generativa adversariella nätverket (GAN). Med utgångspunkt i Goodfellows arbete har flera mer avancerade modeller som använder samma idé uppvisat stora förbättringar när det gäller både bildkvalitet och datamångfald. GAN-modeller genererar bilder genom att mata in vektorer från ett vektorrum till ett generativt neuralt nätverk. Strukturen hos dessa så kallade latenta vektorer visar sig motsvara semantiska likheter mellan motsvarande genererade bilder. I detta examensarbete har DCGAN-modellen tränats på en ny datamängd som består av bildsekvenser av basilikaplantors tillväxtprocess från groning till skörd. Vi utvärderar den tränade modellen genom att jämföra DCGAN-modellen mot referensdataset som MNIST och CIFAR10 och drar slutsatsen att DCGAN tränad på datasetet för basilikaväxter uppnår liknande resultat jämfört med MNIST-datasetet och bättre resultat jämfört med CIFAR10-datasetet. För att påvisa potentialen av att använda mer avancerade GAN-modeller jämförs resultaten från DCGAN-modellen med den mer avancerade StyleGAN2-modellen.
Vi undersöker också det latenta vektorrum som produceras av DCGAN-modellen och bekräftar att DCGAN-modellen i enlighet med tidigare forskning kan generera ett latent rum med dataspecifika semantiska strukturer. För DCGAN-modellen som tränats på datamängden med basilikaplantor lyckas det latenta rummet skilja mellan bilder av basilikaplantor i tidiga stadier och sena stadier av plantor i tillväxtprocessen. Med hjälp av den sekventiella semantiken i datamängden för basilikaväxter görs dessutom ett försök att generera en artificiell tillväxtsekvens med hjälp av linjär interpolation. Slutligen presenterar vi ett misslyckat försök att visualisera det latenta rummet som produceras av DCGAN-modellen med hjälp av ett rudimentärt tillvägagångssätt för att invertera den generativa nätverksfunktionen.
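The growth-sequence experiment described above amounts to walking a straight line between two latent codes and decoding each point with the trained generator. The following sketch is a minimal illustration of that idea; it assumes a trained DCGAN generator G with the common PyTorch input shape (N, 100, 1, 1), and the interface and latent dimension are assumptions rather than details taken from the thesis.

```python
# Minimal sketch of latent-space linear interpolation (assumed interface:
# a trained DCGAN generator G taking inputs of shape (N, 100, 1, 1)).
import torch

def interpolate_latent(G, z_start, z_end, steps=8):
    """Decode images along a straight line between two latent vectors."""
    alphas = torch.linspace(0.0, 1.0, steps)
    z = torch.stack([(1 - a) * z_start + a * z_end for a in alphas])
    with torch.no_grad():
        return G(z.view(steps, -1, 1, 1))  # one image per interpolation step

# z_early / z_late would be latent codes whose decoded images resemble an
# early-stage and a late-stage basil plant (assumed to be known in advance).
z_early, z_late = torch.randn(100), torch.randn(100)
# frames = interpolate_latent(G, z_early, z_late)  # G = the trained generator
```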
154

Dependability-driven Strategies to Improve the Design and Verification of Safety-Critical HDL-based Embedded Systems

Tuzov, Ilya 25 January 2021 (has links)
[ES] La utilización de sistemas empotrados en cada vez más ámbitos de aplicación está llevando a que su diseño deba enfrentarse a mayores requisitos de rendimiento, consumo de energía y área (PPA). Asimismo, su utilización en aplicaciones críticas provoca que deban cumplir con estrictos requisitos de confiabilidad para garantizar su correcto funcionamiento durante períodos prolongados de tiempo. En particular, el uso de dispositivos lógicos programables de tipo FPGA es un gran desafío desde la perspectiva de la confiabilidad, ya que estos dispositivos son muy sensibles a la radiación. Por todo ello, la confiabilidad debe considerarse como uno de los criterios principales para la toma de decisiones a lo largo del todo flujo de diseño, que debe complementarse con diversos procesos que permitan alcanzar estrictos requisitos de confiabilidad. Primero, la evaluación de la robustez del diseño permite identificar sus puntos débiles, guiando así la definición de mecanismos de tolerancia a fallos. Segundo, la eficacia de los mecanismos definidos debe validarse experimentalmente. Tercero, la evaluación comparativa de la confiabilidad permite a los diseñadores seleccionar los componentes prediseñados (IP), las tecnologías de implementación y las herramientas de diseño (EDA) más adecuadas desde la perspectiva de la confiabilidad. Por último, la exploración del espacio de diseño (DSE) permite configurar de manera óptima los componentes y las herramientas seleccionados, mejorando así la confiabilidad y las métricas PPA de la implementación resultante. Todos los procesos anteriormente mencionados se basan en técnicas de inyección de fallos para evaluar la robustez del sistema diseñado. A pesar de que existe una amplia variedad de técnicas de inyección de fallos, varias problemas aún deben abordarse para cubrir las necesidades planteadas en el flujo de diseño. Aquellas soluciones basadas en simulación (SBFI) deben adaptarse a los modelos de nivel de implementación, teniendo en cuenta la arquitectura de los diversos componentes de la tecnología utilizada. Las técnicas de inyección de fallos basadas en FPGAs (FFI) deben abordar problemas relacionados con la granularidad del análisis para poder localizar los puntos débiles del diseño. Otro desafío es la reducción del coste temporal de los experimentos de inyección de fallos. Debido a la alta complejidad de los diseños actuales, el tiempo experimental dedicado a la evaluación de la confiabilidad puede ser excesivo incluso en aquellos escenarios más simples, mientras que puede ser inviable en aquellos procesos relacionados con la evaluación de múltiples configuraciones alternativas del diseño. Por último, estos procesos orientados a la confiabilidad carecen de un soporte instrumental que permita cubrir el flujo de diseño con toda su variedad de lenguajes de descripción de hardware, tecnologías de implementación y herramientas de diseño. Esta tesis aborda los retos anteriormente mencionados con el fin de integrar, de manera eficaz, estos procesos orientados a la confiabilidad en el flujo de diseño. Primeramente, se proponen nuevos métodos de inyección de fallos que permiten una evaluación de la confiabilidad, precisa y detallada, en diferentes niveles del flujo de diseño. Segundo, se definen nuevas técnicas para la aceleración de los experimentos de inyección que mejoran su coste temporal. 
Tercero, se define dos estrategias DSE que permiten configurar de manera óptima (desde la perspectiva de la confiabilidad) los componentes IP y las herramientas EDA, con un coste experimental mínimo. Cuarto, se propone un kit de herramientas que automatiza e incorpora con eficacia los procesos orientados a la confiabilidad en el flujo de diseño semicustom. Finalmente, se demuestra la utilidad y eficacia de las propuestas mediante un caso de estudio en el que se implementan tres procesadores empotrados en un FPGA de Xilinx serie 7. / [CA] La utilització de sistemes encastats en cada vegada més àmbits d'aplicació està portant al fet que el seu disseny haja d'enfrontar-se a majors requisits de rendiment, consum d'energia i àrea (PPA). Així mateix, la seua utilització en aplicacions crítiques provoca que hagen de complir amb estrictes requisits de confiabilitat per a garantir el seu correcte funcionament durant períodes prolongats de temps. En particular, l'ús de dispositius lògics programables de tipus FPGA és un gran desafiament des de la perspectiva de la confiabilitat, ja que aquests dispositius són molt sensibles a la radiació. Per tot això, la confiabilitat ha de considerar-se com un dels criteris principals per a la presa de decisions al llarg del tot flux de disseny, que ha de complementar-se amb diversos processos que permeten aconseguir estrictes requisits de confiabilitat. Primer, l'avaluació de la robustesa del disseny permet identificar els seus punts febles, guiant així la definició de mecanismes de tolerància a fallades. Segon, l'eficàcia dels mecanismes definits ha de validar-se experimentalment. Tercer, l'avaluació comparativa de la confiabilitat permet als dissenyadors seleccionar els components predissenyats (IP), les tecnologies d'implementació i les eines de disseny (EDA) més adequades des de la perspectiva de la confiabilitat. Finalment, l'exploració de l'espai de disseny (DSE) permet configurar de manera òptima els components i les eines seleccionats, millorant així la confiabilitat i les mètriques PPA de la implementació resultant. Tots els processos anteriorment esmentats es basen en tècniques d'injecció de fallades per a poder avaluar la robustesa del sistema dissenyat. A pesar que existeix una àmplia varietat de tècniques d'injecció de fallades, diverses problemes encara han d'abordar-se per a cobrir les necessitats plantejades en el flux de disseny. Aquelles solucions basades en simulació (SBFI) han d'adaptar-se als models de nivell d'implementació, tenint en compte l'arquitectura dels diversos components de la tecnologia utilitzada. Les tècniques d'injecció de fallades basades en FPGAs (FFI) han d'abordar problemes relacionats amb la granularitat de l'anàlisi per a poder localitzar els punts febles del disseny. Un altre desafiament és la reducció del cost temporal dels experiments d'injecció de fallades. A causa de l'alta complexitat dels dissenys actuals, el temps experimental dedicat a l'avaluació de la confiabilitat pot ser excessiu fins i tot en aquells escenaris més simples, mentre que pot ser inviable en aquells processos relacionats amb l'avaluació de múltiples configuracions alternatives del disseny. Finalment, aquests processos orientats a la confiabilitat manquen d'un suport instrumental que permeta cobrir el flux de disseny amb tota la seua varietat de llenguatges de descripció de maquinari, tecnologies d'implementació i eines de disseny. 
Aquesta tesi aborda els reptes anteriorment esmentats amb la finalitat d'integrar, de manera eficaç, aquests processos orientats a la confiabilitat en el flux de disseny. Primerament, es proposen nous mètodes d'injecció de fallades que permeten una avaluació de la confiabilitat, precisa i detallada, en diferents nivells del flux de disseny. Segon, es defineixen noves tècniques per a l'acceleració dels experiments d'injecció que milloren el seu cost temporal. Tercer, es defineix dues estratègies DSE que permeten configurar de manera òptima (des de la perspectiva de la confiabilitat) els components IP i les eines EDA, amb un cost experimental mínim. Quart, es proposa un kit d'eines (DAVOS) que automatitza i incorpora amb eficàcia els processos orientats a la confiabilitat en el flux de disseny semicustom. Finalment, es demostra la utilitat i eficàcia de les propostes mitjançant un cas d'estudi en el qual s'implementen tres processadors encastats en un FPGA de Xilinx serie 7. / [EN] Embedded systems are steadily extending their application areas, dealing with increasing requirements in performance, power consumption, and area (PPA). Whenever embedded systems are used in safety-critical applications, they must also meet rigorous dependability requirements to guarantee their correct operation during an extended period of time. Meeting these requirements is especially challenging for those systems that are based on Field Programmable Gate Arrays (FPGAs), since they are very susceptible to Single Event Upsets. This leads to increased dependability threats, especially in harsh environments. Dependability should therefore be considered one of the primary criteria for decision making throughout the whole design flow, which should be complemented by several dependability-driven processes. First, dependability assessment quantifies the robustness of hardware designs against faults and identifies their weak points. Second, dependability-driven verification ensures the correctness and efficiency of fault mitigation mechanisms. Third, dependability benchmarking allows designers to select (from a dependability perspective) the most suitable IP cores, implementation technologies, and electronic design automation (EDA) tools. Finally, dependability-aware design space exploration (DSE) allows designers to optimally configure the selected IP cores and EDA tools to improve as much as possible the dependability and PPA features of the resulting implementations. The aforementioned processes rely on fault injection testing to quantify the robustness of the designed systems. Although a wide variety of fault injection solutions exists nowadays, several important problems still need to be addressed to better cover the needs of a dependability-driven design flow. In particular, simulation-based fault injection (SBFI) should be adapted to implementation-level HDL models to take into account the architecture of diverse logic primitives, while keeping the injection procedures generic and low-intrusive. Likewise, the granularity of FPGA-based fault injection (FFI) should be refined to enable the accurate identification of weak points in FPGA-based designs. Another important challenge that dependability-driven processes face in practice is the reduction of SBFI and FFI experimental effort. The high complexity of modern designs raises the experimental effort beyond the available time budgets, even in simple dependability assessment scenarios, and it becomes prohibitive in the presence of alternative design configurations.
Finally, dependability-driven processes lack instrumental support covering the semicustom design flow in all its variety of description languages, implementation technologies, and EDA tools. Existing fault injection tools only partially cover the individual stages of the design flow, as they are usually specific to a particular design representation level and implementation technology. This work addresses the aforementioned challenges by efficiently integrating dependability-driven processes into the design flow. First, it proposes new SBFI and FFI approaches that enable an accurate and detailed dependability assessment at different levels of the design flow. Second, it improves the performance of dependability-driven processes by defining new techniques for accelerating SBFI and FFI experiments. Third, it defines two DSE strategies that enable the optimal dependability-aware tuning of IP cores and EDA tools, while reducing as much as possible the robustness evaluation effort. Fourth, it proposes a new toolkit (DAVOS) that automates and seamlessly integrates the aforementioned dependability-driven processes into the semicustom design flow. Finally, it illustrates the usefulness and efficiency of these proposals through a case study consisting of three soft-core embedded processors implemented on a Xilinx 7-series SoC FPGA. / Tuzov, I. (2020). Dependability-driven Strategies to Improve the Design and Verification of Safety-Critical HDL-based Embedded Systems [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/159883
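The SBFI idea underlying this work can be illustrated with a toy campaign: flip a single bit of the simulated state at a random cycle, rerun, and compare the observable outputs against a fault-free golden run. The sketch below is purely illustrative; it is not the DAVOS toolkit, and the toy shift-register "model" merely stands in for an HDL simulation.

```python
# Toy single-event-upset injection campaign (illustrative only, not DAVOS).
import random

def run_model(seed_bits, flip=None):
    """Stand-in for an HDL simulation; flip = (cycle, bit_index) or None."""
    state = list(seed_bits)
    outputs = []
    for cycle in range(16):
        if flip and flip[0] == cycle:
            state[flip[1]] ^= 1            # inject a single-event upset
        new_bit = state[0] & state[3]      # toy next-state / output logic
        state = state[1:] + [new_bit]
        outputs.append(new_bit)            # only this bit is observable
    return outputs

def sbfi_campaign(n_experiments=1000):
    seed = [0, 1, 0, 1, 1, 0, 0, 1]
    golden = run_model(seed)               # fault-free reference run
    failures = sum(
        run_model(seed, flip=(random.randrange(16), random.randrange(len(seed)))) != golden
        for _ in range(n_experiments)
    )
    return failures / n_experiments        # fraction of faults that propagate

print(f"estimated failure probability: {sbfi_campaign():.1%}")
```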
155

Development of methodologies for memory management and design space exploration of SW/HW computer architectures for designing embedded systems / Ανάπτυξη μεθοδολογιών διαχείρισης μνήμης και εξερεύνησης σχεδιασμών σε αρχιτεκτονικές υπολογιστών υλικού/λογισμικού για σχεδίαση ενσωματωμένων συστημάτων

Κρητικάκου, Αγγελική 16 May 2014 (has links)
This PhD dissertation proposes innovative methodologies to support the design and mapping process of embedded systems. Due to increasing requirements, embedded systems have become quite complex, as they consist of several partially dependent heterogeneous components. Systematic Design Space Exploration (DSE) methodologies are required to support the near-optimal design of embedded systems within the available short time-to-market. In this target domain, the existing DSE approaches either require too much exploration time to find near-optimal designs, due to the high number of parameters and the correlations between the parameters of the target domain, or they end up with a less efficient trade-off result in order to find a design within acceptable time. In this dissertation we present an alternative DSE methodology, which is based on the systematic creation of scalable and near-optimal DSE frameworks. The frameworks describe all the available options of the exploration space in a finite set of classes. A set of principles is presented which is used in the reusable DSE methodology to create a scalable and near-optimal framework and to efficiently use it to derive scalable and near-optimal design solutions within a Pareto trade-off space. The reusable DSE methodology is applied to several stages of the embedded system design flow to derive scalable and near-optimal methodologies. The first part of the dissertation is dedicated to the development of mapping methodologies for storing large embedded system data arrays in the lower layers of the on-chip background data memory hierarchy, and the second part to the DSE methodologies for the processing part of SW/HW architectures in embedded systems, including the foreground memory systems. Existing mapping approaches for the background memory part are either enumerative, symbolic/polyhedral, or worst-case (heuristic) approximations. The enumerative approaches require too much exploration time, the worst-case approximations lead to overestimation of the storage requirements, whereas the symbolic/polytope approaches are scalable and near-optimal only for solid and regular iteration spaces. By applying the new reusable DSE methodology, we have developed an intra-signal in-place optimization methodology which is scalable and near-optimal for highly irregular access schemes. Scalable and near-optimal solutions for the different cases of the proposed methodology have been developed for non-overlapping and overlapping store and load access schemes. To support the proposed methodology, a new representation of the array access schemes, which is appropriate to express the irregular shapes in a scalable and near-optimal way, is presented. A general pattern formulation has been proposed which describes the access scheme in a compact and repetitive way. Pattern operations were developed to combine the patterns in a scalable and near-optimal way under all the potential pattern combination cases which may exist in the application under study. In the processing-oriented part of the dissertation, a DSE methodology is developed for mapping instances of a predefined target application domain onto a partially fixed architecture platform template, which consists of one processor core and several custom hardware accelerators. The DSE methodology consists of uni-directional steps, which are implemented through parametric templates and are applied without costly design iterations.
The proposed DSE methodology explores the space by instantiating the steps and propagating design constraints which prune design options following the steps ordering. The result is a final Pareto trade-off curve with the most relevant near-optimal designs. As the scheduling and the assignment are the major tasks of both the foreground and the datapath, near-optimal and scalable techniques are required to support the parametric templates of the proposed DSE methodology. A framework which describes the scheduling and assignment of the scalars into the registers and the scheduling and assignment of the operation into the function units of the data path is developed. Based on the framework, a systematic methodology to arrive at parametric templates for scheduling and assignment techniques which satisfy the target domain constraints is developed. In this way, a scalable parametric template for scheduling and assignment tasks is created, which guarantees near-optimality for the domain under study. The developed template can be used in the Foreground Memory Management step and Data-path mapping step of the overall design flow. For the DSE of the domain under study, near-optimal results are hence achieved through a truly scalable technique. / Η παρούσα διδακτορική διατριβή προτείνει καινοτόμες μεθοδολογίες για τον σχεδιασμό και τη διαδικασία απεικόνισης σε ενσωματωμένα συστημάτα. Λόγω των αυξανόμενων απαιτήσεων, τα ενσωματωμένα συστήματα είναι αρκετά περίπλοκα, καθώς αποτελούνται από πολλά και εν μέρει εξαρτώμενα ετερογενή στοιχεία. Συστηματικές μεθοδολογίες για την εξερεύνηση του χώρου λύσεων (Design Space Exploration – DSE) απαιτούνται σχεδόν βέλτιστες σχεδιάσεις ενσωματωμένων συστημάτων εντός του διαθέσιμου χρονου. Οι υπάρχουσες DSE μεθοδολογίες απαιτούν είτε πάρα πολύ χρόνο εξερεύνησης για να βρουν τους σχεδόν βέλτιστους σχεδιασμούς, λόγω του μεγάλου αριθμού των παραμέτρων και τις συσχετίσεις μεταξύ των παραμέτρων, ή καταλήγουν με ένα λιγότερο βέλτιστο σχέδιο, προκειμένου να βρειθεί ένας σχεδιασμός εντός του διαθέσιμου χρόνου. Στην παρούσα διδακτορική διατριβή παρουσιάζουμε μια εναλλακτική DSE μεθοδολογία, η οποία βασίζεται στη συστηματική δημιουργία επεκτάσιμων και σχεδόν βέλτιστων DSE πλαισίων. Τα πλαίσια περιγράφουν όλες τις διαθέσιμες επιλογές στο χώρο εξερεύνησης με ένα πεπερασμένο σύνολο κατηγοριών. Ένα σύνολο αρχών χρησιμοποιείται στην επαναχρησιμοποιήούμενη DSE μεθοδολογία για να δημιουργήσει ένα επεκτάσιμο και σχεδόν βέλτιστο DSE πλαίσιο και να χρησιμοποιήθεί αποτελεσματικά για να δημιουργήσει επεκτάσιμες και σχεδόν βέλτιστες σχεδιαστικές λύσεις σε ένα Pareto Trade-off χώρο λύσεων. Η DSE μεθοδολογία εφαρμόζεται διάφορα στάδια της σχεδιαστικής ροής για ενσωματωμένα συστήματα και να δημιουργήσει επεκτάσιμες και σχεδόν βέλτιστες μεθοδολογίες. Το πρώτο μέρος της διατριβής είναι αφιερωμένο στην ανάπτυξη των μεθόδων απεικόνισης για την αποθήκευση μεγάλων πινάκων που χρησιμοποιούνται στα ενσωματωμένα συστήματα και αποθηκεύονται στα χαμηλότερα στρώματα της on-chip Background ιεραρχία μνήμης. Το δεύτερο μέρος είναι αφιερωμένο σε DSE μεθοδολογίες για το τμήμα επεξεργασίας σε αρχιτεκτονικές λογισμικού/υλικού σε ενσωματωμένα συστήματα, συμπεριλαμβανομένων των συστημάτων της προσκήνιας (foreground) μνήμης. Υπάρχουσες μεθοδολογίες απεικόνισης για την Background μνήμης είτε εξονυχιστικές, συμβολικές/πολυεδρικές και προσεγγίσεις με βάση τη χειρότερη περίπτωση. 
Οι εξονυχιστικές απαιτούν πάρα πολύ μεγάλο χρόνο εξερεύνησης, οι προσεγγίσεις οδηγούν σε υπερεκτίμηση των απαιτήσεων αποθήκευσης, ενώ οι συμβολικές είναι επεκτάσιμη και σχεδόν βέλτιστές μονο για τακτικούς χώρους επαναλήψεων. Με την εφαρμογή της προτεινόμενης DSE μεθοδολογίας αναπτύχθηκε μια επεκτάσιμη και σχεδόν βέλτιστη μεθοδολγοία για την εύρεση του αποθηκευτικού μεγέθους για τα δεδομένα ενός πίνακα για άτακτους και για τακτικούς χώρους επαναλήψεων. Προτάθηκε μια νέα αναπαράσταση των προσπελάσεων στη μνήμη, η οποία εκφράζει τα ακανόνιστα σχήματα στο χώρο επεναλήψεων με επακτάσιμο και σχεδόν βέλτιστο τρόπο. Στο δεύτερο τμήμα της διατριβής, μια DSE μεθοδολογία αναπτύχθηκε για το σχεδιασμό ενός προκαθορισμένου τομέα από εφαρμογές σε μια μερικώς αποφασισμένη αρχιτεκτονική πλατφόρμα, η οποία αποτελείται από ένα πυρήνα επεξεργαστή και αρκετούς συνεπεξεργαστές. Η DSE μεθοδολογία αποτελείται από μονής κατεύθυνσης βήματα, τα οποία υλοποιούνται μέσω παραμετρικών πλαισίων και εφαρμόζονται αποφέυγοντας τις δαπανηρές επαναλήψεις κατά τον σχεδιασμό. Η προτεινόμενη DSE μεθοδολογία εξερευνά το χώρο βρίσκοντας στιγμιότυπα για καθε βήμα και διαδίδονατς τις αποφάσεις μεταξύ βημάτων. Με αυτό το τρόπο κλαδεύουν τις επιλογές σχεδιασμού στα επόμενα βήματα. Το αποτέλεσμα είναι μια Pareto καμπύλη. Ένα DSE πλαίσιο προτάθηκε που περιγράφει τις τεχνικές χρονοπρογραμματισμού και ανάθεσης πόρων των καταχωρητών και των μονάδων εκτέλεσης του συστήματος. Προτάθηκε μια μεθοδολογία για να δημιουργεί σχεδόν βέλτιστα και επεκτάσιμα παραμετρικά πρότυπα για τον χρονοπρογραμματισμό και την ανάθεση πόρων που ικανοποιεί τους περιορισμούς ενός τομέα εφαρμογών.
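As a rough illustration of the intra-signal in-place idea described above, the storage requirement for one array can be estimated as the maximum number of elements that are simultaneously alive between the step that stores them and the step of their last load. The sketch below is a deliberately simplified, assumed formulation; it is not the dissertation's pattern-based framework, which handles irregular access schemes symbolically.

```python
# Simplified sketch (assumed formulation): intra-signal in-place storage size
# as the peak number of simultaneously live array elements.
def inplace_storage_size(live_ranges):
    """live_ranges: one (write_step, last_read_step) pair per array element."""
    events = []
    for start, end in live_ranges:
        events.append((start, +1))    # element becomes live when written
        events.append((end + 1, -1))  # element dies after its last read
    events.sort()
    alive = peak = 0
    for _, delta in events:
        alive += delta
        peak = max(peak, alive)
    return peak                        # in-place locations needed for the array

# Example: a sliding access window where each element lives for four steps,
# so only 4 locations are needed instead of 20.
print(inplace_storage_size([(i, i + 3) for i in range(20)]))  # -> 4
```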
156

Techniques d'analyse et d'optimisation pour la synthèse architecturale de systèmes temps réel embarqués distribués : problèmes de placement, de partitionnement et d'ordonnancement / Analysis and optimization techniques for the architectural synthesis of real time embedded and distributed systems

Mehiaoui, Asma 16 June 2014 (has links)
Dans le cadre industriel et académique, les méthodologies de développement logiciel exploitent de plus en plus le concept de “modèle” afin d’appréhender la complexité des systèmes temps réel critiques. En particulier, celles-ci définissent une étape dans laquelle un modèle fonctionnel, conçu comme un graphe de blocs fonctionnels communiquant via des échanges de signaux de données, est déployé sur un modèle de plateforme d’exécution matérielle et un modèle de plateforme d’exécution logicielle composé de tâches et de messages. Cette étape, appelée étape de déploiement, permet d’établir une architecture opérationnelle du système nécessitant une validation des propriétés temporelles du système. Dans le contexte des systèmes temps réel dirigés par les évènements, la vérification des propriétés temporelles est réalisée à l’aide de l’analyse d’ordonnançabilité basée sur l’analyse des temps de réponse. Chaque choix de déploiement effectué a un impact essentiel sur la validité et la qualité du système. Néanmoins, les méthodologies existantes n’offrent pas de support permettant de guider le concepteur d’applications durant l’exploration de l’espace des architectures possibles. L’objectif de ces travaux de thèse consiste à mettre en place des techniques d’analyse et de synthèse automatiques permettant de guider le concepteur vers une architecture opérationnelle valide et optimisée par rapport aux performances du système. Notre proposition est dédiée à l’exploration de l’espace des architectures en tenant compte à la fois des quatre degrés de liberté déterminés durant la phase de déploiement, à savoir (i) le placement des éléments fonctionnels sur les éléments de calcul et de communication de la plateforme d’exécution, (ii) le partitionnement des éléments fonctionnels en tâches temps réel et des signaux de données en messages, (iii) l’affectation de priorités d’exécution aux tâches et aux messages du système et (iv) l’attribution du mécanisme de protection des données partagées pour les systèmes temps réel périodiques. Nous nous intéressons principalement à la satisfaction des contraintes temporelles et celles liées aux capacités des ressources de la plateforme cible. De plus, nous considérons l’optimisation des latences de bout-en-bout et la consommation mémoire. Les approches d’exploration architecturale présentées dans cette thèse sont basées sur la technique d’optimisation PLNE (programmation linéaire en nombres entiers) et concernent à la fois les applications activées périodiquement et celles dont l’activation est pilotée par les données. Contrairement à de nombreuses approches antérieures fournissant une solution partielle au problème de déploiement, les méthodes proposées considèrent l’ensemble du problème de déploiement. Les approches proposées dans cette thèse sont évaluées à l’aide d’applications génériques et industrielles. / Modern development methodologies from industry and academia increasingly exploit the "model" concept to address the complexity of critical real-time systems. These methodologies define a key stage in which the functional model, designed as a network of function blocks communicating through exchanged data signals, is deployed onto a hardware execution platform model and implemented in a software model consisting of a set of tasks and messages. This stage, the so-called deployment stage, establishes an operational architecture of the system and thus requires evaluation and validation of the system's temporal properties.
In the context of event-driven real-time systems, the verification of temporal properties is performed using schedulability analysis based on response time analysis. Each deployment choice has an essential impact on the validity and the quality of the system. However, the existing methodologies do not provide support to guide the designer of applications in the exploration of the space of operational architectures. The objective of this thesis is to develop techniques for the analysis and automatic synthesis of a valid operational architecture optimized with respect to system performance. Our proposition is dedicated to the exploration of the architecture space, considering simultaneously the four degrees of freedom determined during the deployment phase: (i) the placement of functional elements on the computing and communication resources of the execution platform, (ii) the partitioning of functional elements into real-time tasks and of data signals into messages, (iii) the assignment of execution priorities to system tasks and messages, and (iv) the assignment of the shared-data protection mechanism for periodic real-time systems. We are mainly interested in satisfying the temporal constraints and the resource capacity constraints of the target platform. In addition, we focus on the optimization of end-to-end latencies and memory consumption. The design space exploration approaches presented in this thesis are based on the MILP (Mixed Integer Linear Programming) optimization technique and concern both time-driven and data-driven applications. Unlike many earlier approaches providing a partial solution to the deployment problem, our methods consider the whole deployment problem. The approaches proposed in this thesis are evaluated using both synthetic and industrial applications.
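The schedulability test mentioned above (response time analysis for fixed-priority preemptive tasks) is what any candidate deployment ultimately has to pass. The sketch below is a minimal, generic illustration of that analysis with an assumed toy task set; it is not code or data from the thesis.

```python
# Minimal sketch of fixed-priority response time analysis (assumed toy values):
# iterate R = C + sum(ceil(R / T_j) * C_j) over all higher-priority tasks j
# until the response time converges or the deadline is missed.
import math

def response_time(task, higher_priority):
    C, T, D = task  # worst-case execution time, period, deadline
    R = C
    while True:
        interference = sum(math.ceil(R / Tj) * Cj for (Cj, Tj, _) in higher_priority)
        R_next = C + interference
        if R_next > D:
            return None          # deadline miss: this deployment is not valid
        if R_next == R:
            return R             # converged worst-case response time
        R = R_next

# Assumed task set, sorted by decreasing priority: (C, T, D).
tasks = [(1, 5, 5), (2, 10, 10), (3, 20, 20)]
for i, t in enumerate(tasks):
    print(f"task {i}: worst-case response time = {response_time(t, tasks[:i])}")
```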
157

Improved Prediction of Adsorption-Based Life Support for Deep Space Exploration

Karen N. Son (5930285) 17 January 2019 (has links)
Adsorbent technology is widely used in many industrial applications including waste heat recovery, water purification, and atmospheric revitalization in confined habitations. Astronauts depend on adsorbent-based systems to remove metabolic carbon dioxide (CO₂) from the cabin atmosphere; as NASA prepares for the journey to Mars, engineers are redesigning the adsorbent-based system for reduced weight and optimal efficiency. These efforts hinge upon the development of accurate, predictive models, as simulations are increasingly relied upon to save cost and time over the traditional design-build-test approach. Engineers rely on simplified models to reduce computational cost and enable parametric optimizations. Amongst these simplified models is the axially dispersed plug-flow model for predicting the adsorbate concentration during flow through an adsorbent bed. This model is ubiquitously used in designing fixed-bed adsorption systems. The current work aims to improve the accuracy of the axially dispersed plug-flow model because of its widespread use. This dissertation identifies the critical model inputs that drive the overall uncertainty in important output quantities and then systematically improves the measurement and prediction of these input parameters. Limitations of the axially dispersed plug-flow model are also discussed, and recommendations made for identifying failure of the plug-flow assumption.

An uncertainty and sensitivity analysis of an axially dispersed plug-flow model is first presented. Upper and lower uncertainty bounds for each of the model inputs are found by comparing empirical correlations against experimental data from the literature. Model uncertainty is then investigated by independently varying each model input between its individual upper and lower uncertainty bounds and observing the relative change in predicted effluent concentration and temperature (e.g., breakthrough time, bed capacity, and effluent temperature). This analysis showed that the LDF mass transfer coefficient is the largest source of uncertainty. Furthermore, the uncertainty analysis reveals that ignoring the effect of wall-channeling on apparent axial dispersion can cause significant error in the predicted breakthrough times of small-diameter beds.

In addition to the LDF mass transfer coefficient and axial dispersion, equilibrium isotherms are known to be strong lever arms and a potentially dominant source of model error. As such, a detailed analysis of the equilibrium adsorption isotherms for zeolite 13X was conducted to improve the fidelity of the CO₂ and H₂O equilibrium isotherms compared to extant data. These two adsorbent/adsorbate pairs are of great interest as NASA plans to use zeolite 13X in the next-generation atmospheric revitalization system. Equilibrium isotherms describe a sorbent's maximum capacity at a given temperature and adsorbate (e.g., CO₂ or H₂O) partial pressure. New isotherm data from NASA Ames Research Center and NASA Marshall Space Flight Center for CO₂ and H₂O adsorption on zeolite 13X are presented. These measurements were carefully collected to eliminate sources of bias in previous data from the literature, where incomplete activation resulted in a reduced capacity. Several models are fit to the new equilibrium isotherm data and recommendations of the best model fit are made.
The best-fit isotherm models from this analysis are used in all subsequent modeling efforts discussed in this dissertation.

The last two chapters examine the limitations of the axially dispersed plug-flow model for predicting breakthrough in confined geometries. When a bed of pellets is confined in a rigid container, packing heterogeneities near the wall lead to faster flow around the periphery of the bed (i.e., wall channeling). Wall-channeling effects have long been considered negligible for beds which hold more than 20 pellets across; however, the present work shows that neglecting wall-channeling effects on dispersion can yield significant errors in model predictions. There is a fundamental gap in understanding the mechanisms which control wall-channeling-driven dispersion. Furthermore, there is currently no way to predict wall-channeling effects a priori or even to identify which systems will be impacted by them. This dissertation aims to fill this gap using both experimental measurements and simulations to identify mechanisms which cause the plug-flow assumption to fail.

First, experimental evidence of wall-channeling in beds, even at large bed-to-pellet diameter ratios (d_bed/d_p = 48), is presented. These experiments are then used to validate a method for accurately extracting mass transfer coefficients from data affected by significant wall channeling. The relative magnitudes of wall-channeling effects are shown to be a function of the adsorbent/adsorbate pair and geometric confinement (i.e., bed size). Ultimately, the axially dispersed plug-flow model fails to capture the physics of breakthrough when non-plug-flow conditions prevail in the bed.

The final chapter of this dissertation develops a two-dimensional (2-D) adsorption model to examine the interplay of wall channeling, adsorption kinetics, and adsorbent equilibrium capacity on breakthrough in confined geometries. The 2-D model incorporates the effect of radial variations in porosity on the velocity profile and is shown to accurately capture the effect of wall-channeling on adsorption behavior. The 2-D model is validated against experimental data, and then used to investigate whether capacity or adsorption kinetics cause certain adsorbates to exhibit more significant radial variations in concentration than others. This work explains why channeling effects can vary for different adsorbate and/or adsorbent pairs, even under otherwise identical conditions, and highlights the importance of considering adsorption kinetics in addition to the traditional d_bed/d_p criterion.

This dissertation investigates key gaps in our understanding of fixed-bed adsorption. It will deliver insight into how these missing pieces impact the accuracy of predictive models and provide a means for reconciling these errors. The culmination of this work will be an accurate, predictive model that assists in the simulation-based design of the next-generation atmospheric revitalization system for humans' journey to Mars.
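For reference, a standard textbook form of the governing equations discussed above couples the axially dispersed plug-flow mass balance with a linear driving force (LDF) uptake term. The notation below is assumed and may differ from the dissertation's:

```latex
\varepsilon \frac{\partial c}{\partial t}
  + u \frac{\partial c}{\partial z}
  + \rho_b \frac{\partial \bar{q}}{\partial t}
  = \varepsilon D_L \frac{\partial^2 c}{\partial z^2},
\qquad
\frac{\partial \bar{q}}{\partial t} = k_{\mathrm{LDF}}\,\bigl(q^{*} - \bar{q}\bigr)
```

Here c is the gas-phase adsorbate concentration, q̄ the pellet-averaged loading, q* the equilibrium loading given by the isotherm, u the gas velocity, ε the bed porosity, ρ_b the bulk density, D_L the apparent axial dispersion coefficient, and k_LDF the LDF mass transfer coefficient that the uncertainty analysis identifies as the largest source of uncertainty.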
158

Co-diseño de sistemas hardware/software tolerantes a fallos inducidos por radiación / Co-design of hardware/software systems tolerant to radiation-induced faults

Restrepo Calle, Felipe 04 November 2011 (has links)
This thesis proposes a methodology for developing hybrid strategies to mitigate radiation-induced faults in modern embedded systems. The proposal is based on the principles of system co-design and consists of the selective, incremental, and flexible combination of hardware- and software-based fault tolerance approaches; that is, the exploration of the solution space is founded on a fine-grained hybrid strategy. The design flow is guided by the requirements of the application. This methodology has been named co-hardening (co-endurecimiento). In this way, it is possible to design dependable embedded systems at low cost, in which not only are the dependability requirements and design constraints satisfied, but the excessive use of costly (hardware and software) protection mechanisms is also avoided.
159

NASA på Nya äventyr i rymden : Populariseringen av den amerikanska visionen om rymden

Nord, Johan January 2008 (has links)
The purpose of this thesis is to bring forward and discuss the American vision for space exploration found on NASA's homepage, how the vision is popularized and why. NASA's homepage is analyzed as popularized science and the theoretical perspective emanates from the field of popular science, especially the work of Johan Kärnfelt. The historical reference is provided by Howard E. McCurdy and his thoughts about space and the American imagination. The analysis is based on a number of documents that popularize the vision for space exploration and are intended for the public. These documents describe the future plan for NASA and US space exploration. / Syftet med denna uppsats är att undersöka hur de amerikanska rymdvisionerna gestaltats på NASA:s hemsida, hur dessa populariseras samt vilka de bakomliggande drivkrafterna kan tänkas vara. NASA:s hemsida kommer att ses som populariserad vetenskap och det teoretiska perspektivet utgår från den populärvetenskapliga genren där en stor del av det teoretiska underlaget utgår från docenten Johan Kärnfelts tankar om populariseringen av vetenskap. Som historisk referens används professor Howard E. McCurdys tankar om den amerikanska rymdvisionen. Materialet som ingår i analysen är hämtat från NASA:s hemsida. Samtliga dokument handlar om den amerikanska visionen om rymden och USA:s fortsatta aktiviteter i rymden.
