  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
151

Laboratory starlight simulator for future space-based heterodyne interferometry

Karlsson, William January 2023 (has links)
In astronomy, interferometry with ground-based telescopes offers the greatest angular resolution. However, the Earth's atmosphere distorts the incident wavefront from a celestial object, causing blurring and signal loss, and it blocks the transmission of certain wavelengths of the electromagnetic spectrum. Space-based interferometers would avoid atmospheric obstruction and potentially enable even higher angular resolution. The main challenge in implementing space-based interferometry is the need to match the optical path differences between the telescopes to within the light's coherence length using physical delay lines. This thesis explores the potential realization of digital delay lines via heterodyne interferometry. The technique generates a heterodyne beat note, in the radio regime, at the frequency difference between the incident stellar light and a reference laser, permitting digitization of the delay line while preserving the phase information for image reconstruction. The primary objective of the thesis is to advance the field by constructing a testbed environment for investigating future space-based heterodyne interferometry in the near-infrared (NIR) range. This requires two main tasks. First, a laboratory starlight simulator is developed to reproduce the wavefront of a distant star as it reaches telescopes on or around Earth. The resulting starlight simulator contains an optical assembly that produces a NIR point source aligned with a mirror collimator's focal point, transforming the wavefront from spherical to planar. Second, a fiber-optic circuit with interference capability is constructed, consisting of a free-space optical delay line and a polarization-controlled custom-length fiber. The delay line matches the optical paths to within the light's coherence length, while the polarization controller optimizes interference visibility.
Completing these tasks establishes the foundation for investigating space-based heterodyne interferometry in the NIR, with the potential implementation of delay-line digitization.
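The heterodyne principle the thesis builds on can be illustrated numerically: mixing two optical fields on a square-law photodetector produces a beat note at their difference frequency, which lands in the radio regime and can be digitized. The sketch below uses made-up, scaled-down frequencies and a modest sample rate (not the thesis's actual NIR values) purely to make the beat visible.

```python
import numpy as np

# Hypothetical, scaled-down stand-ins for the optical frequencies so the
# beat is visible at a modest sample rate; real NIR frequencies are ~200 THz.
fs = 1_000_000          # sample rate (Hz)
f_star = 200_000.0      # stand-in for the stellar light frequency (Hz)
f_laser = 190_000.0     # stand-in for the reference laser frequency (Hz)

t = np.arange(0, 0.01, 1 / fs)
field = np.cos(2 * np.pi * f_star * t) + np.cos(2 * np.pi * f_laser * t)

# A square-law detector responds to intensity; the cross term oscillates at
# the difference frequency |f_star - f_laser| (the heterodyne beat note).
intensity = field ** 2
spectrum = np.abs(np.fft.rfft(intensity - intensity.mean()))
freqs = np.fft.rfftfreq(len(intensity), 1 / fs)

# The detector's electrical bandwidth passes only the RF part of the output.
band = freqs < 50_000
beat = freqs[band][np.argmax(spectrum[band])]
print(beat)  # dominant RF component at the 10 kHz difference frequency
```

The digitized beat note retains the phase of the stellar field relative to the reference laser, which is what allows the delay line to be applied after digitization.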
152

Basil-GAN / Basilika-GAN

Risberg, Jonatan January 2022 (has links)
Developments in computer vision have sought to design deep neural networks which, trained on a large set of images, can generate high-quality artificial images sharing semantic qualities with the original image set. A pivotal shift came with the introduction of the generative adversarial network (GAN) by Goodfellow et al. Building on Goodfellow's work, more advanced models using the same idea have shown great improvements in both image quality and data diversity. GAN models generate images by feeding samples from a vector space into a generative neural network. The structure of these so-called latent vector samples corresponds to semantic similarities between their generated images. In this thesis the DCGAN model is trained on a novel data set consisting of image sequences of the growth process of basil plants, from germination to harvest. We evaluate the trained model by comparing DCGAN performance on benchmark data sets such as MNIST and CIFAR10, and conclude that the model trained on the basil plant data set achieved results similar to those on MNIST and better than those on CIFAR10. To argue for the potential of more advanced GAN models, we compare the results from the DCGAN model with the contemporary StyleGAN2 model. We also investigate the latent vector space produced by the DCGAN model and confirm, in accordance with previous research, that the DCGAN model can generate a latent space with data-specific semantic structures. For the DCGAN model trained on the basil plant data set, the latent space can distinguish images of early-stage basil plants from late-stage plants in the growth phase. Furthermore, exploiting the sequential semantics of the basil plant data set, an attempt at generating an artificial growth sequence is made using linear interpolation.
Finally, we present an unsuccessful attempt at visualizing the latent space produced by the DCGAN model using a rudimentary approach to inverting the generator network function.
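The growth-sequence experiment described above relies on straight-line paths in latent space. A minimal sketch of that interpolation step, with random stand-in vectors in place of latents actually selected from the trained DCGAN:

```python
import numpy as np

rng = np.random.default_rng(0)
latent_dim = 100  # DCGAN commonly uses a 100-dimensional latent space

# Stand-ins for latent vectors whose generated images would show an
# early-stage and a late-stage basil plant (in the thesis these come
# from the trained model, not from random sampling).
z_early = rng.standard_normal(latent_dim)
z_late = rng.standard_normal(latent_dim)

def interpolate(z0, z1, steps):
    """Linear interpolation in latent space: z(t) = (1 - t) z0 + t z1."""
    ts = np.linspace(0.0, 1.0, steps)
    return np.stack([(1 - t) * z0 + t * z1 for t in ts])

frames = interpolate(z_early, z_late, steps=8)
print(frames.shape)  # (8, 100): one latent vector per synthetic growth frame
```

Each of the eight latent vectors would then be fed through the generator network to render one frame of the artificial growth sequence.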
153

Development of methodologies for memory management and design space exploration of SW/HW computer architectures for designing embedded systems / Ανάπτυξη μεθοδολογιών διαχείρισης μνήμης και εξερεύνησης σχεδιασμών σε αρχιτεκτονικές υπολογιστών υλικού/λογισμικού για σχεδίαση ενσωματωμένων συστημάτων

Κρητικάκου, Αγγελική 16 May 2014 (has links)
This PhD dissertation proposes innovative methodologies to support the design and mapping process of embedded systems. Due to increasing requirements, embedded systems have become quite complex, as they consist of several partially dependent heterogeneous components. Systematic design space exploration (DSE) methodologies are required to support the near-optimal design of embedded systems within the available short time-to-market. In this target domain, existing DSE approaches either require too much exploration time to find near-optimal designs, due to the high number of parameters and the correlations between them, or settle for a less efficient trade-off in order to find a design within acceptable time. In this dissertation we present an alternative DSE methodology based on the systematic creation of scalable and near-optimal DSE frameworks. The frameworks describe all the available options of the exploration space in a finite set of classes. A set of principles is presented that is used in the reusable DSE methodology to create a scalable and near-optimal framework and to use it efficiently to derive scalable and near-optimal design solutions within a Pareto trade-off space. The reusable DSE methodology is applied to several stages of the embedded system design flow to derive scalable and near-optimal methodologies. The first part of the dissertation is dedicated to the development of mapping methodologies for storing large embedded-system data arrays in the lower layers of the on-chip background data memory hierarchy; the second part covers DSE methodologies for the processing part of SW/HW architectures in embedded systems, including the foreground memory systems. Existing mapping approaches for the background memory part are enumerative, symbolic/polyhedral, or worst-case (heuristic) approximations.
The enumerative approaches require too much exploration time, the worst-case approximations lead to overestimation of the storage requirements, whereas the symbolic/polyhedral approaches are scalable and near-optimal only for solid and regular iteration spaces. By applying the new reusable DSE methodology, we have developed an intra-signal in-place optimization methodology that is scalable and near-optimal for highly irregular access schemes. Scalable and near-optimal solutions have been developed for the cases of non-overlapping and overlapping store and load access schemes. To support the proposed methodology, a new representation of the array access schemes is presented, suited to expressing the irregular shapes in a scalable and near-optimal way. A general pattern formulation is proposed that describes the access scheme in a compact and repetitive way, and pattern operations were developed to combine the patterns in a scalable and near-optimal way under all the pattern-combination cases that may exist in the application under study. In the processing-oriented part of the dissertation, a DSE methodology is developed for mapping instances of a predefined target application domain onto a partially fixed architecture platform template, which consists of one processor core and several custom hardware accelerators. The DSE methodology consists of uni-directional steps, which are implemented through parametric templates and applied without costly design iterations. The proposed DSE methodology explores the space by instantiating the steps and propagating design constraints that prune design options following the step ordering. The result is a final Pareto trade-off curve with the most relevant near-optimal designs.
As scheduling and assignment are the major tasks for both the foreground memory and the datapath, near-optimal and scalable techniques are required to support the parametric templates of the proposed DSE methodology. A framework is developed that describes the scheduling and assignment of scalars into registers and of operations onto the function units of the datapath. Based on this framework, a systematic methodology is developed to arrive at parametric templates for scheduling and assignment techniques that satisfy the target domain constraints. In this way, a scalable parametric template for scheduling and assignment tasks is created, which guarantees near-optimality for the domain under study. The developed template can be used in the foreground memory management and datapath mapping steps of the overall design flow. For the DSE of the domain under study, near-optimal results are hence achieved through a truly scalable technique.
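The Pareto trade-off curve mentioned above is the set of non-dominated designs. A minimal sketch of that filtering step, with hypothetical (latency, energy) design points rather than values from the dissertation:

```python
def pareto_front(points):
    """Keep the non-dominated points when every objective is minimized.

    A design dominates another if it is no worse in all objectives
    and they are not identical.
    """
    front = []
    for p in points:
        dominated = any(
            all(q[i] <= p[i] for i in range(len(p))) and q != p
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

# Hypothetical (latency, energy) numbers for candidate designs.
designs = [(10, 90), (20, 60), (30, 30), (25, 70), (40, 25), (35, 40)]
print(sorted(pareto_front(designs)))
# keeps only the non-dominated trade-offs, e.g. (25, 70) is dropped
# because (20, 60) is better in both objectives
```

In the methodology itself the points are produced by instantiating the exploration steps; the pruning by propagated constraints means most dominated options are never generated in the first place.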
154

Techniques d'analyse et d'optimisation pour la synthèse architecturale de systèmes temps réel embarqués distribués : problèmes de placement, de partitionnement et d'ordonnancement / Analysis and optimization techniques for the architectural synthesis of real time embedded and distributed systems

Mehiaoui, Asma 16 June 2014 (has links)
Modern development methodologies from industry and academia increasingly exploit the "model" concept to address the complexity of critical real-time systems. These methodologies define a key stage in which the functional model, designed as a network of function blocks communicating through exchanged data signals, is deployed onto a hardware execution platform model and implemented in a software model consisting of a set of tasks and messages.
This so-called deployment stage establishes an operational architecture of the system and thus requires evaluation and validation of the system's temporal properties. In the context of event-driven real-time systems, the verification of temporal properties is performed using schedulability analysis based on response-time analysis. Each deployment choice has an essential impact on the validity and quality of the system. However, existing methodologies do not provide support to guide the application designer in exploring the space of operational architectures. The objective of this thesis is to develop techniques for the analysis and automatic synthesis of a valid operational architecture optimized with respect to system performance. Our proposal addresses the exploration of the architecture space considering, at the same time, the four degrees of freedom determined during the deployment phase: (i) the placement of functional elements on the computing and communication resources of the execution platform, (ii) the partitioning of functional elements into real-time tasks and of data signals into messages, (iii) the assignment of priorities to system tasks and messages, and (iv) the assignment of the shared-data protection mechanism for periodic real-time systems. We are mainly interested in meeting the temporal constraints and the memory capacity of the target platform. In addition, we focus on the optimization of end-to-end latency and memory consumption. The design space exploration approaches presented in this thesis are based on the MILP (mixed integer linear programming) optimization technique and address both time-driven and data-driven applications. Unlike many earlier approaches that provide a partial solution to the deployment problem, our methods consider the whole deployment problem. The approaches proposed in this thesis are evaluated using both synthetic and industrial applications.
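The schedulability check underlying this exploration is response-time analysis. As an illustration only (the classic fixed-priority response-time iteration on an invented task set, not the thesis's own formulation), each task's worst-case response time is the fixed point R = C_i + sum over higher-priority tasks j of ceil(R / T_j) * C_j:

```python
import math

def response_time(tasks, i):
    """Worst-case response time of task i under fixed-priority preemptive
    scheduling. tasks is a list of (C, T) pairs, highest priority first;
    returns None if the response time exceeds the period (deadline miss)."""
    C, T = tasks[i]
    R = C
    while True:
        # Interference from each higher-priority task j: ceil(R/T_j) releases.
        R_next = C + sum(math.ceil(R / Tj) * Cj for Cj, Tj in tasks[:i])
        if R_next == R:
            return R
        if R_next > T:
            return None
        R = R_next

# Hypothetical task set: (worst-case execution time, period), priority order.
tasks = [(1, 4), (2, 6), (3, 12)]
print([response_time(tasks, i) for i in range(len(tasks))])  # [1, 3, 10]
```

A deployment candidate (a placement, partitioning, and priority assignment) is kept only if every task's and message's response time stays within its deadline; an MILP formulation encodes the same constraints declaratively.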
155

Improved Prediction of Adsorption-Based Life Support for Deep Space Exploration

Karen N. Son (5930285) 17 January 2019 (has links)
Adsorbent technology is widely used in many industrial applications, including waste-heat recovery, water purification, and atmospheric revitalization in confined habitations. Astronauts depend on adsorbent-based systems to remove metabolic carbon dioxide (CO2) from the cabin atmosphere; as NASA prepares for the journey to Mars, engineers are redesigning the adsorbent-based system for reduced weight and optimal efficiency. These efforts hinge on the development of accurate predictive models, as simulations are increasingly relied upon to save cost and time over the traditional design-build-test approach. Engineers rely on simplified models to reduce computational cost and enable parametric optimization. Among these simplified models is the axially dispersed plug-flow model for predicting the adsorbate concentration during flow through an adsorbent bed, which is ubiquitously used in designing fixed-bed adsorption systems. Because of its widespread use, the current work aims to improve the accuracy of the axially dispersed plug-flow model. This dissertation identifies the critical model inputs that drive the overall uncertainty in important output quantities, then systematically improves the measurement and prediction of these input parameters. Limitations of the axially dispersed plug-flow model are also discussed, and recommendations are made for identifying failure of the plug-flow assumption.

An uncertainty and sensitivity analysis of an axially dispersed plug-flow model is first presented. Upper and lower uncertainty bounds for each of the model inputs are found by comparing empirical correlations against experimental data from the literature. Model uncertainty is then investigated by independently varying each model input between its upper and lower uncertainty bounds and observing the relative change in predicted effluent concentration and temperature (e.g., breakthrough time, bed capacity, and effluent temperature). This analysis shows that the linear driving force (LDF) mass transfer coefficient is the largest source of uncertainty. Furthermore, the uncertainty analysis reveals that ignoring the effect of wall channeling on apparent axial dispersion can cause significant error in the predicted breakthrough times of small-diameter beds.

In addition to the LDF mass transfer coefficient and axial dispersion, equilibrium isotherms are known to be strong lever arms and a potentially dominant source of model error. As such, a detailed analysis of the equilibrium adsorption isotherms for zeolite 13X was conducted to improve the fidelity of the CO2 and H2O equilibrium isotherms relative to extant data. These two adsorbent/adsorbate pairs are of great interest because NASA plans to use zeolite 13X in the next-generation atmospheric revitalization system. Equilibrium isotherms describe a sorbent's maximum capacity at a given temperature and adsorbate (e.g., CO2 or H2O) partial pressure. New isotherm data from NASA Ames Research Center and NASA Marshall Space Flight Center for CO2 and H2O adsorption on zeolite 13X are presented. These measurements were carefully collected to eliminate sources of bias in previous data from the literature, where incomplete activation resulted in a reduced capacity. Several models are fit to the new equilibrium isotherm data, and recommendations for the best model fit are made. The best-fit isotherm models from this analysis are used in all subsequent modeling efforts discussed in this dissertation.

The last two chapters examine the limitations of the axially dispersed plug-flow model for predicting breakthrough in confined geometries. When a bed of pellets is confined in a rigid container, packing heterogeneities near the wall lead to faster flow around the periphery of the bed (i.e., wall channeling). Wall-channeling effects have long been considered negligible for beds that hold more than 20 pellets across; however, the present work shows that neglecting wall-channeling effects on dispersion can yield significant errors in model predictions. There is a fundamental gap in understanding the mechanisms that control wall-channeling-driven dispersion, and there is currently no way to predict wall-channeling effects a priori or even to identify which systems will be affected. This dissertation aims to fill this gap using both experimental measurements and simulations to identify the mechanisms that cause the plug-flow assumption to fail.

First, experimental evidence of wall channeling is presented in beds even at large bed-to-pellet diameter ratios (d_bed/d_p = 48). These experiments are then used to validate a method for accurately extracting mass transfer coefficients from data affected by significant wall channeling. The relative magnitude of wall-channeling effects is shown to be a function of the adsorbent/adsorbate pair and the geometric confinement (i.e., bed size). Ultimately, the axially dispersed plug-flow model fails to capture the physics of breakthrough when non-plug-flow conditions prevail in the bed.

The final chapter of this dissertation develops a two-dimensional (2-D) adsorption model to examine the interplay of wall channeling, adsorption kinetics, and adsorbent equilibrium capacity on breakthrough in confined geometries. The 2-D model incorporates the effect of radial variations in porosity on the velocity profile and is shown to accurately capture the effect of wall channeling on adsorption behavior. The 2-D model is validated against experimental data and then used to investigate whether capacity or adsorption kinetics cause certain adsorbates to exhibit more significant radial variations in concentration than others. This work explains why channeling effects can vary for different adsorbate and/or adsorbent pairs, even under otherwise identical conditions, and highlights the importance of considering adsorption kinetics in addition to the traditional d_bed/d_p criterion.

This dissertation investigates key gaps in our understanding of fixed-bed adsorption. It delivers insight into how these missing pieces impact the accuracy of predictive models and provides a means for reconciling these errors. The culmination of this work is an accurate predictive model that assists in the simulation-based design of the next-generation atmospheric revitalization system for humans' journey to Mars.
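A plug-flow bed model with an LDF uptake term can be sketched in a few lines. The toy solver below is illustrative only: it drops the axial-dispersion term for brevity, assumes a linear isotherm q* = K c, and uses invented parameter values, none of which come from the dissertation.

```python
import numpy as np

# Toy 1-D plug-flow + linear driving force (LDF) breakthrough sketch.
# All parameter values below are assumptions for illustration.
N, L = 50, 0.1          # grid cells, bed length (m)
u, dt = 0.01, 0.001     # interstitial velocity (m/s), time step (s)
k_ldf, K = 0.05, 200.0  # LDF coefficient (1/s), linear isotherm slope (-)
cap_ratio = 2.0         # lumped solid-to-fluid capacity ratio (assumed)

dz = L / N
c = np.zeros(N)         # fluid-phase concentration (normalized, inlet = 1)
q = np.zeros(N)         # adsorbed-phase loading (normalized)

def step(c, q):
    # LDF uptake: dq/dt = k_ldf * (q* - q) with q* = K * c
    dqdt = k_ldf * (K * c - q)
    # First-order upwind advection with a constant inlet concentration of 1.
    adv = -u * np.diff(np.concatenate(([1.0], c))) / dz
    c = c + dt * (adv - cap_ratio * dqdt)
    q = q + dt * dqdt
    return np.clip(c, 0.0, 1.0), q

for _ in range(20000):
    c, q = step(c, q)

# The concentration front moves through the bed; the inlet end loads first,
# and the effluent concentration rises toward 1 as the bed saturates.
print(q[0] >= q[-1], 0.0 <= c[-1] <= 1.0)
```

A wall-channeling study like the one described above would replace this 1-D model with the 2-D version, letting the velocity vary radially with the near-wall porosity.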
156

Co-diseño de sistemas hardware/software tolerantes a fallos inducidos por radiación

Restrepo Calle, Felipe 04 November 2011 (has links)
This thesis proposes a methodology for developing hybrid strategies to mitigate radiation-induced faults in modern embedded systems. The proposal is based on the principles of system co-design and consists of the selective, incremental, and flexible combination of hardware-based and software-based fault-tolerance approaches; that is, the exploration of the solution space is founded on a fine-grained hybrid strategy. The design flow is guided by the application requirements. This methodology has been named co-hardening. In this way, it is possible to design dependable embedded systems at low cost, satisfying the dependability requirements and design constraints while avoiding the excessive use of costly protection mechanisms (hardware and software).
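A common software-side ingredient of such hybrid hardening strategies is redundant execution with majority voting. As a generic illustration only (a software triple-modular-redundancy sketch, not the thesis's co-hardening toolchain):

```python
from collections import Counter

def tmr(computation, *args):
    """Software triple modular redundancy: run the computation three times
    and vote. A transient soft-error corrupting one copy is masked by the
    2-of-3 majority; with no majority, the fault is uncorrectable."""
    results = [computation(*args) for _ in range(3)]
    value, votes = Counter(results).most_common(1)[0]
    if votes < 2:
        raise RuntimeError("no majority: uncorrectable fault")
    return value

# Illustrative: a computation whose first execution is corrupted by a
# single-event-upset-style bit flip (5 with bit 3 flipped gives 13).
calls = iter([5 ^ (1 << 3), 5, 5])
print(tmr(lambda: next(calls)))  # 5: the corrupted copy is outvoted
```

In a hybrid flow, protection like this is applied selectively to the software sections where it is cheapest, while hardware mechanisms cover the rest.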
157

NASA på Nya äventyr i rymden : Populariseringen av den amerikanska visionen om rymden

Nord, Johan January 2008 (has links)
The purpose of this thesis is to present and discuss the American vision for space exploration found on NASA's homepage, how the vision is popularized, and why. NASA's homepage is analyzed as popularized science, and the theoretical perspective emanates from the field of popular science, especially the work of Johan Kärnfelt. The historical reference is Howard E. McCurdy and his thoughts about space and the American imagination. The analysis is based on a number of documents that popularize the vision for space exploration and are intended for the public. These documents describe the future plan for NASA and US space exploration.
159

Développement de modèles physiques pour comprendre la croissance des plantes en environnement de gravité réduite pour des applications dans les systèmes support-vie / Developing physical models to understand the growth of plants in reduced gravity environments for applications in life-support systems

Poulet, Lucie 11 July 2018 (has links)
The challenges posed by human exploration of the solar system differ from those of the International Space Station: distances and time frames are of a different scale, preventing frequent resupply. Bioregenerative life-support systems based on higher plants and microorganisms, such as the European Space Agency's Micro-Ecological Life Support System Alternative (MELiSSA) project, will enable crews to be autonomous in food production, air revitalization, and water recycling, while closing the water, oxygen, nitrogen, and carbon cycles during long-duration missions, and will therefore become essential. The growth and development of higher plants and other biological organisms are strongly influenced by environmental conditions (e.g. gravity, pressure, temperature, relative humidity, and the partial pressures of O2 and CO2). To predict plant growth under these non-standard conditions, it is crucial to develop mechanistic growth models that enable a multi-scale study of the different phenomena, to gain a thorough understanding of all processes involved in plant development in reduced gravity, and to identify knowledge gaps.
In particular, gas exchange at the leaf surface is altered in reduced gravity, which could limit plant growth in space. We therefore studied the intricate relationships between forced convection, gravity level, and biomass production, and found that including gravity as a parameter in plant gas-exchange models requires an accurate description of mass and heat transfer in the boundary layer. We added an energy balance to the mass balance of an existing plant-growth model, which introduced time-dependent variations of the leaf surface temperature. This variable can be measured with infrared cameras, and a parabolic-flight experiment enabled us to validate local gas-transfer models at 0 g and 2 g without ventilation. Finally, sap transport, root growth, and leaf senescence need to be studied under reduced-gravity conditions. This would make it possible to link our gas-exchange model to plant morphology and resource allocation within the plant, and thus arrive at a complete mechanistic model of plant growth in reduced-gravity environments.
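The gravity dependence described in this abstract can be illustrated with a rough boundary-layer sketch: forced convection scales with air speed, while free (buoyant) convection scales with gravity and vanishes at 0 g, so an unventilated leaf in weightlessness loses its convective exchange. The following Python sketch uses textbook flat-plate correlations; the function name, constants, and the mixed-convection blending rule are illustrative assumptions, not values or methods taken from the thesis.

```python
import math

def boundary_layer_conductance(wind_speed, leaf_size, delta_t, gravity=9.81):
    """Rough leaf boundary-layer conductance to heat (m/s), from
    flat-plate correlations (illustrative constants, air near 25 C)."""
    nu = 1.6e-5      # kinematic viscosity of air, m^2/s
    k_air = 0.026    # thermal conductivity of air, W/(m K)
    alpha = 2.2e-5   # thermal diffusivity of air, m^2/s
    t_air = 298.0    # air temperature, K
    rho_cp = 1200.0  # volumetric heat capacity of air, J/(m^3 K)

    # Forced convection over a laminar flat plate: Nu = 0.664 Re^1/2 Pr^1/3
    pr = nu / alpha
    nu_forced = 0.0
    if wind_speed > 0:
        re = wind_speed * leaf_size / nu
        nu_forced = 0.664 * math.sqrt(re) * pr ** (1 / 3)

    # Free convection: Nu = 0.54 Ra^1/4. The Rayleigh number is
    # proportional to g, so this contribution vanishes at 0 g.
    nu_free = 0.0
    if gravity > 0 and delta_t > 0:
        ra = gravity * (delta_t / t_air) * leaf_size ** 3 / (nu * alpha)
        nu_free = 0.54 * ra ** 0.25

    # Blend the two regimes (a common mixed-convection approximation).
    nu_mixed = (nu_forced ** 3 + nu_free ** 3) ** (1 / 3)
    h = nu_mixed * k_air / leaf_size   # heat transfer coefficient, W/(m^2 K)
    return h / rho_cp                  # conductance in m/s

# With no wind and no gravity, both terms are zero: an unventilated leaf
# at 0 g exchanges heat (and, analogously, gas) only by pure diffusion,
# which is why forced ventilation matters in space plant chambers.
```

Comparing `boundary_layer_conductance(1.0, 0.05, 5.0, gravity=9.81)` with `gravity=0.0` shows only a modest drop at this wind speed, but with `wind_speed=0.0` the conductance collapses to zero at 0 g.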
160

Crest Factor Reduction using High Level Synthesis

Mahmood, Hassan January 2017 (has links)
Modern wireless mobile communication technology has improved markedly over earlier generations but is still hampered by the poor power efficiency of the power amplifiers in today's base stations. One contributing factor is that modern modulation techniques such as orthogonal frequency-division multiplexing produce signals with a high peak-to-average power ratio, also known as the crest factor. Crest factor reduction algorithms are used to address this problem. The dominant method of hardware description for synthesis, however, has been to write register-transfer-level code, which yields a fixed implementation that may not be the optimal solution. This thesis project develops a peak-cancellation crest factor reduction system using a high-level language as the system design language and synthesizes it with high-level synthesis. The aim is to determine whether a high-level synthesis design methodology can yield increased productivity and improved quality of results for such designs, compared with implementing the system directly at the register transfer level. Design space exploration is performed to find a design that is optimal with respect to area, and a few parameters are presented for measuring the performance of the system, which helps in tuning it. The design space exploration made it possible to choose the best of four different configurations. The final implementation produced by high-level synthesis had an area comparable to the previous register-transfer-level implementation, and for this design the high-level synthesis methodology increased productivity and decreased design time.
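The crest factor problem and the peak-cancellation idea can be sketched numerically. The following Python sketch is illustrative only: it computes the PAPR of an OFDM-like signal and applies one pass of a simplified peak-cancellation pulse subtraction. The pulse shape, pulse length, and threshold are assumptions for demonstration, not the configuration implemented in the thesis.

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio (squared crest factor) in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

def peak_cancellation(x, threshold, half_len=32):
    """One pass of simplified peak cancellation: at every input sample
    whose magnitude exceeds the threshold, subtract a scaled
    raised-cosine pulse that pulls the sample down to the threshold.
    The pulse's limited bandwidth confines the noise the cancellation
    injects; a single pass can leave residual peaks or peak regrowth."""
    y = x.copy()
    n = half_len
    pulse = 0.5 * (1 + np.cos(np.linspace(-np.pi, np.pi, 2 * n + 1)))
    over = np.flatnonzero(np.abs(x) > threshold)   # peaks in the input
    for i in over:
        mag = np.abs(y[i])
        if mag <= threshold:
            continue  # already pulled down by a neighbouring pulse
        # Complex excess: how far the sample sticks out, with its phase.
        excess = (mag - threshold) * y[i] / mag
        lo, hi = max(0, i - n), min(len(y), i + n + 1)
        y[lo:hi] -= excess * pulse[lo - (i - n): hi - (i - n)]
    return y

# OFDM-like test signal: 256 random QPSK subcarriers give a high PAPR.
rng = np.random.default_rng(0)
sym = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=256)
x = np.fft.ifft(sym) * np.sqrt(256)
thr = 2.0 * np.sqrt(np.mean(np.abs(x) ** 2))  # 6 dB above the mean power
y = peak_cancellation(x, thr)
```

Production CFR designs additionally oversample the signal, iterate several cancellation passes, and shape the pulse against the spectrum mask; those refinements are exactly the kind of parameters the design space exploration above would tune.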
