
Προδιαγραφές μιας καινοτόμας πλατφόρμας ηλεκτρονικής μάθησης που ενσωματώνει τεχνικές επεξεργασίας φυσικής γλώσσας / Specifications of an Innovative e-Learning Platform Incorporating Natural Language Processing Techniques

Φερφυρή, Ναυσικά 04 September 2013 (has links)
We live in a society in which technology has entered everyday life dynamically, and education could not remain unaffected by these new technologies. Terms such as "e-learning" and "asynchronous distance learning" have already created new conditions for traditional education. By asynchronous distance learning we mean a process of exchanging learning between instructor and learners that takes place independently of time and place. E-learning is the use of new multimedia technologies and the Internet to improve the quality of learning by facilitating access to information resources and services, as well as remote exchange and collaboration. The term covers a wide range of applications and processes, such as electronic classrooms, digital collaboration, and learning based on computers and web technologies. Some of the basic requirements an e-learning platform should meet are: supporting discussion forums and chat rooms; providing e-mail; offering a friendly environment for both the student and the teacher; supporting customization of the environment for each user; keeping user information (profiles) to assist navigation; supporting the easy creation of online tests; and supporting the presentation of multimedia material. We define natural language processing (NLP) as the computational analysis of unstructured textual data with the aim of achieving machine understanding of the text: the system processes sentences that are entered or read and responds with sentences in a way that resembles the answers of an educated person. Grammar, syntax, and the analysis of semantic content and of knowledge in general play a key role in making human language understandable to the machine. The basic natural-language processing techniques rely on general knowledge about natural language and use simple heuristic rules based on the syntactic and semantic analysis of the text. Techniques that apply across all application domains include tokenization, use of the document structure (structural data mining), elimination of insignificant words, part-of-speech (PoS) tagging, morphological analysis, and syntactic analysis. The aim of this thesis is to describe and evaluate how NLP techniques could be exploited and integrated into e-learning platforms. The large volume of data provided through an e-learning platform must be managed, distributed, and retrieved correctly. Using NLP techniques, an innovative e-learning platform is presented that exploits high-level personalization techniques and the ability to draw conclusions by processing the users' natural language, adapting the offered educational material to the needs of each user.
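The preprocessing techniques named in this abstract (tokenization, elimination of insignificant words, PoS tagging) can be illustrated with a minimal Python sketch; the stop-word list and suffix heuristics below are assumptions for illustration only and are not taken from the platform described in the thesis.

```python
# Minimal sketch of basic NLP preprocessing: tokenization, stop-word
# elimination, and a very naive PoS-like tag. Illustrative only.
import re

STOP_WORDS = {"the", "a", "an", "of", "and", "to", "in", "is", "for"}  # assumed list

def tokenize(text: str) -> list[str]:
    """Split raw text into lowercase word tokens."""
    return re.findall(r"[a-z]+", text.lower())

def remove_insignificant(tokens: list[str]) -> list[str]:
    """Drop tokens that carry little content (stop words)."""
    return [t for t in tokens if t not in STOP_WORDS]

def naive_tag(token: str) -> str:
    """Very rough part-of-speech guess based on common English suffixes."""
    if token.endswith("ing") or token.endswith("ed"):
        return "VERB?"
    if token.endswith("ly"):
        return "ADV?"
    return "NOUN/OTHER"

if __name__ == "__main__":
    text = "The platform adapts the offered learning material to each user."
    tokens = remove_insignificant(tokenize(text))
    print([(t, naive_tag(t)) for t in tokens])
```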

[en] JAAF: IMPLEMENTING SERVICE ORIENTED SELF-ADAPTIVE AGENTS / [pt] JAAF: IMPLEMENTANDO AGENTES AUTO-ADAPTATIVOS ORIENTADOS A SERVIÇOS

BALDOINO FONSECA DOS SANTOS NETO 29 September 2010 (has links)
[en] Service Oriented Multi-agent Systems (SOMS) have emerged in order to incorporate the benefits of two software engineering disciplines: Service-oriented Architecture and Agent-oriented Software Engineering. The first provides loosely coupled services that can be used within different domains. The second is based on a new software engineering paradigm that addresses the development of Multi-agent Systems, which are composed of autonomous, pro-active and reasoning entities, named software agents. One of the main goals of SOMS is to help the development of service-oriented systems able to adapt themselves in dynamic computing environments. Those systems must be able to react at runtime to changes in their requirements, as well as to efficiently accommodate deviations from their expected functionality or quality of service. In this context, this work proposes a framework (Java self-Adaptive Agent Framework - JAAF) to implement self-adaptive agents able to autonomously and pro-actively discover services, decide on the most appropriate one, and adapt themselves if they face a problem while executing the service. The applicability of the proposed framework is demonstrated through two case studies. The first is a system responsible for generating susceptibility maps, i.e., maps that show locations with landslide risks in a given area. The second is a system whose main goal is to satisfy users' travel-related needs.
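As an illustration of the discover–select–execute–adapt cycle the abstract attributes to JAAF agents, here is a minimal Python sketch; the Service type, the quality-based scoring, and the retry policy are assumptions for illustration and not the actual JAAF API.

```python
# Illustrative discover -> select -> execute -> adapt loop; not the JAAF API.
from dataclasses import dataclass
import random

@dataclass
class Service:
    name: str
    quality: float       # advertised quality of service in [0, 1]
    failure_rate: float  # probability that an invocation fails

def discover() -> list[Service]:
    """Stand-in for a service registry lookup (hypothetical services)."""
    return [Service("maps-a", 0.9, 0.5), Service("maps-b", 0.7, 0.1)]

def invoke(service: Service) -> bool:
    """Simulate invoking the service; may fail."""
    return random.random() > service.failure_rate

def run_with_adaptation(max_attempts: int = 3) -> str:
    candidates = sorted(discover(), key=lambda s: s.quality, reverse=True)
    for _ in range(max_attempts):
        for service in candidates:
            if invoke(service):
                return f"succeeded with {service.name}"
            # adaptation step: demote the failing service and try the next one
            service.quality *= 0.5
        candidates.sort(key=lambda s: s.quality, reverse=True)
    return "all candidates failed"

if __name__ == "__main__":
    print(run_with_adaptation())
```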

Métodos sem malha e método dos elementos finitos generalizados em análise não-linear de estruturas / Meshless Methods and Generalized Finite Element Method in Structural Nonlinear Analysis

Felício Bruzzi Barros 27 March 2002 (has links)
The Generalized Finite Element Method, GFEM, shares several features with the so-called meshless methods. The approximation functions used in the GFEM are associated with nodal points, as in meshless methods. In addition, the enrichment of the approximation spaces can be done in the same fashion as in the meshless hp-Cloud method. On the other hand, the partition of unity used in the GFEM is provided by Lagrangian finite element shape functions. Therefore, the method can also be understood as a variation of the Finite Element Method. Indeed, both interpretations of the GFEM are valid and give unique insights into the method. The meshless character of the GFEM justified the investigation of meshless methods in this work. Among them, the Element Free Galerkin Method and the hp-Cloud Method are described, aiming to introduce key concepts of the GFEM formulation. Following that, several linear problems are solved using these three methods. Such linear analyses demonstrate several features of the GFEM and its suitability to simulate propagating discontinuities. Next, damage models employed to model the nonlinear behavior of concrete structures are discussed and numerical analyses using the hp-Cloud Method and the GFEM are presented. The results motivate the implementation of a p-adaptive procedure tailored to the GFEM. The technique adopted is the Equilibrated Element Residual Method. The estimator is modified to take into account nonlinear peculiarities of the problems considered. The hypotheses assumed in the definition of the error measure are sometimes violated. Nonetheless, it is shown that the proposed error indicator is effective for the class of p-adaptive nonlinear analyses investigated. Finally, several suggestions are enumerated for future applications of the GFEM, especially for the simulation of damage and crack propagation.
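The partition-of-unity construction mentioned in the abstract is commonly written as follows (standard GFEM notation, not quoted from the thesis): the finite element shape functions form the partition of unity and are multiplied by nodal enrichment functions.

```latex
% Partition-of-unity form of the GFEM approximation: shape functions \varphi_i
% build the partition of unity; \psi_{ij} are nodal enrichment functions with
% additional degrees of freedom a_{ij}.
u_h(\mathbf{x}) \;=\; \sum_{i} \varphi_i(\mathbf{x})
  \Bigl( u_i \;+\; \sum_{j} a_{ij}\, \psi_{ij}(\mathbf{x}) \Bigr),
\qquad \sum_i \varphi_i(\mathbf{x}) = 1 .
```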

Odhady algebraické chyby a zastavovací kritéria v numerickém řešení parciálních diferenciálních rovnic / Estimation of the Algebraic Error and Stopping Criteria in the Numerical Solution of Partial Differential Equations

Papež, Jan January 2011 (has links)
Title: Estimation of the algebraic error and stopping criteria in numerical solution of partial differential equations Author: Jan Papež Department: Department of Numerical Mathematics Supervisor of the master thesis: Zdeněk Strakoš Abstract: After introduction of the model problem and its properties we describe the Conjugate Gradient Method (CG). We present the estimates of the energy norm of the error and a heuristic for the adaptive refinement of the estimate. The difference in the local behaviour of the discretization and the algebraic error is illustrated by numerical experiments using the given model problem. A posteriori estimates for the discretization and the total error that take into account the inexact solution of the algebraic system are then discussed. In order to get a useful perspective, we briefly recall the multigrid method. Then the Cascadic Conjugate Gradient Method of Deuflhard (CCG) is presented. Using the estimates for the error presented in the preceding parts of the thesis, the new stopping criteria for CCG are proposed. The CCG method with the new stopping criteria is then tested. Keywords: numerical PDE, discretization error, algebraic error, error estimates, locality of the error, adaptivity
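A minimal NumPy sketch of the classical delay-based lower bound on the energy norm of the CG error, which is the kind of estimate the abstract refers to; the test matrix, the delay value, and the convergence tolerance are illustrative assumptions, not the thesis's adaptive estimator.

```python
# Sketch of the standard CG lower bound: the squared A-norm of the error at
# iteration k is approximated from below by the sum of gamma_i * ||r_i||^2
# over a window of d subsequent iterations (delay d). Illustrative only.
import numpy as np

def cg_with_error_estimate(A, b, iters=20, delay=4):
    x = np.zeros(len(b))
    r = b - A @ x
    p = r.copy()
    rr = r @ r
    terms = []  # gamma_i * ||r_i||^2 collected at each iteration i
    for _ in range(iters):
        Ap = A @ p
        gamma = rr / (p @ Ap)
        terms.append(gamma * rr)
        x += gamma * p
        r -= gamma * Ap
        rr_new = r @ r
        if rr_new < 1e-28 * (b @ b):   # assumed convergence tolerance
            rr = rr_new
            break
        p = r + (rr_new / rr) * p
        rr = rr_new
    # lower bounds on the squared A-norm of the error at iterations 0, 1, ...
    bounds = [sum(terms[k:k + delay]) for k in range(max(len(terms) - delay, 0))]
    return x, bounds

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    M = rng.standard_normal((30, 30))
    A = M @ M.T + 30 * np.eye(30)      # symmetric positive definite test matrix
    b = rng.standard_normal(30)
    _, bounds = cg_with_error_estimate(A, b)
    print(bounds[:5])
```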

Adaptivní hp nespojitá Galerkinova metoda pro nestacionární stlačitelné Eulerovy rovnice / Adaptive hp-Discontinuous Galerkin Method for Nonstationary Compressible Euler Equations

Korous, Lukáš January 2012 (has links)
The compressible Euler equations describe the motion of compressible inviscid fluids. They are used in many areas ranging from aerospace, automotive, and nuclear engineering to chemistry, ecology, climatology, and others. Mathematically, the compressible Euler equations represent a hyperbolic system consisting of several nonlinear partial differential equations (conservation laws). These equations are solved most frequently by means of Finite Volume Methods (FVM) and low-order Finite Element Methods (FEM). However, both of these approaches lack higher-order accuracy and, moreover, it is well known that conforming FEM is not the optimal tool for the discretization of first-order equations. The most promising approach to the approximate solution of the compressible Euler equations is the discontinuous Galerkin method, which combines the stability of FVM with the excellent approximation properties of higher-order FEM. The objective of this Master Thesis was to develop, implement and test new adaptive algorithms for the nonstationary compressible Euler equations based on higher-order discontinuous Galerkin (hp-DG) methods. The new methods build on discontinuous Galerkin methods and on space-time adaptive hp-FEM algorithms on dynamical meshes for nonstationary second-order problems. The new algorithms...
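For reference, the compressible Euler equations the abstract refers to are usually stated in conservative form as follows (standard notation with the perfect-gas closure; not quoted from the thesis).

```latex
% Compressible Euler equations in conservative form: density rho,
% momentum rho v, total energy E, pressure p, adiabatic index gamma.
\frac{\partial \boldsymbol{w}}{\partial t} + \nabla \cdot \boldsymbol{f}(\boldsymbol{w}) = \boldsymbol{0},
\qquad
\boldsymbol{w} = \begin{pmatrix} \rho \\ \rho \boldsymbol{v} \\ E \end{pmatrix},
\qquad
\boldsymbol{f}(\boldsymbol{w}) = \begin{pmatrix} \rho \boldsymbol{v} \\ \rho \boldsymbol{v} \otimes \boldsymbol{v} + p\, \mathbb{I} \\ (E + p)\, \boldsymbol{v} \end{pmatrix},
\qquad
p = (\gamma - 1)\Bigl(E - \tfrac{1}{2}\rho\, |\boldsymbol{v}|^2\Bigr).
```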

Advanced Numerical Modelling of Discontinuities in Coupled Boundary Value Problems

Kästner, Markus 18 August 2016 (has links)
Industrial development processes as well as research in physics, materials and engineering science rely on computer modelling and simulation techniques today. With increasing computer power, computations are carried out on multiple scales and involve the analysis of coupled problems. In this work, continuum modelling is therefore applied at different scales in order to facilitate a prediction of the effective material or structural behaviour based on the local morphology and the properties of the individual constituents. This provides valuable insight into the structure-property relations which are of interest for any design process. In order to obtain reasonable predictions for the effective behaviour, numerical models which capture the essential fine-scale features are required. In this context, the efficient representation of discontinuities as they arise at, e.g., material interfaces or cracks becomes more important than in purely phenomenological macroscopic approaches. In this work, two different approaches to the modelling of discontinuities are discussed: (i) a sharp interface representation which requires the localisation of interfaces by the mesh topology; since many interesting macroscopic phenomena are related to the temporal evolution of certain microscopic features, (ii) diffuse interface models, which regularise the interface in terms of an additional field variable and therefore avoid topological mesh updates, are considered as an alternative. With the two combinations (i) Extended Finite Element Method (XFEM) + sharp interface model and (ii) Isogeometric Analysis (IGA) + diffuse interface model, two fundamentally different approaches to the modelling of discontinuities are investigated in this work. XFEM reduces the continuity of the approximation by introducing suitable enrichment functions according to the discontinuity to be modelled. Diffuse models instead regularise the interface, which in many cases requires an even increased continuity that is provided by the spline-based approximation. To further increase the efficiency of isogeometric discretisations of diffuse interfaces, adaptive mesh refinement and coarsening techniques based on hierarchical splines are presented. The adaptive meshes are found to reduce the number of degrees of freedom required for a certain accuracy of the approximation significantly. Selected discretisation techniques are applied to solve a coupled magneto-mechanical problem for particulate microstructures of Magnetorheological Elastomers (MRE). In combination with a computational homogenisation approach, these microscopic models allow for the prediction of the effective coupled magneto-mechanical response of MRE. Moreover, finite element models of generic MRE microstructures are coupled with a BEM domain that represents the surrounding free space in order to take into account finite sample geometries. The macroscopic behaviour is analysed in terms of actuation stresses, magnetostrictive deformations, and magnetorheological effects. The results obtained for different microstructures and various loadings have been found to be in qualitative agreement with experiments on MRE as well as analytical results.
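A sharp discontinuity of the kind discussed above is typically represented in XFEM by a Heaviside-enriched approximation of the following standard form (generic notation, not quoted from the thesis).

```latex
% Heaviside enrichment across a discontinuity: N_i are finite element shape
% functions, u_i standard nodal values, a_j additional enriched degrees of
% freedom, and H(x) jumps across the interface or crack.
\boldsymbol{u}_h(\boldsymbol{x}) \;=\;
  \sum_{i \in \mathcal{I}} N_i(\boldsymbol{x})\, \boldsymbol{u}_i
  \;+\;
  \sum_{j \in \mathcal{I}^{\mathrm{enr}}} N_j(\boldsymbol{x})\, H(\boldsymbol{x})\, \boldsymbol{a}_j .
```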

Adaptive Middleware for Self-Configurable Embedded Real-Time Systems: Experiences from the DySCAS Project and Remaining Challenges

Persson, Magnus January 2009 (has links)
Development of software for embedded real-time systems poses several challenges. Hard and soft constraints on timing, and usually considerable resource limitations, put important constraints on the development. The traditional way of coping with these issues is to produce a fully static design, i.e. one that is fully fixed already during design time. Current trends in the area of embedded systems, including the emerging openness in these types of systems, are providing new challenges for their designers – e.g. integration of new software during runtime, software upgrade or run-time adaptation of application behavior to facilitate better performance combined with more efficient resource usage. One way to reach these goals is to build self-configurable systems, i.e. systems that can resolve such issues without human intervention. Such mechanisms may be used to promote increased system openness. This thesis covers some of the challenges involved in that development. An overview of the current situation is given, with an extensive review of different concepts that are applicable to the problem, including adaptivity mechanisms (including QoS and load balancing), middleware and relevant design approaches (component-based, model-based and architectural design). A middleware is a software layer that can be used in distributed systems, with the purpose of abstracting away distribution, and possibly other aspects, for the application developers. The DySCAS project had as a major goal the development of middleware for self-configurable systems in the automotive sector. Such development is complicated by the special requirements that apply to these platforms. Work on the implementation of an adaptive middleware, DyLite, providing self-configurability to small-scale microcontrollers, is described and covered in detail. DyLite is a partial implementation of the concepts developed in DySCAS. Another area given significant focus is formal modeling of QoS and resource management. Currently, applications in these types of systems are not given a fully formal definition, at least not one also covering real-time aspects. Using formal modeling would extend the possibilities for verification of not only system functionality, but also of resource usage, timing and other extra-functional requirements. This thesis includes a proposal of a formalism to be used for these purposes. Several challenges in providing methodology and tools that are usable in production development still remain. Several key issues in this area are described, e.g. version/configuration management, access control, and integration between different tools, together with proposals for future work in the other areas covered by the thesis. / DySCAS
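A toy monitor–decide–act loop in the spirit of the QoS adaptation mechanisms surveyed in the abstract; the load thresholds and quality levels are illustrative assumptions, and the sketch is unrelated to the actual DyLite implementation.

```python
# Minimal self-configuration loop: sense load, adjust application quality.
# Thresholds, quality levels, and the load sensor are assumed for illustration.
import random

QUALITY_LEVELS = ["low", "medium", "high"]

def read_cpu_load() -> float:
    """Stand-in for a platform-specific load sensor."""
    return random.uniform(0.0, 1.0)

def adapt(current: str, load: float) -> str:
    """Lower the quality level under high load, raise it when load is low."""
    idx = QUALITY_LEVELS.index(current)
    if load > 0.8 and idx > 0:
        return QUALITY_LEVELS[idx - 1]
    if load < 0.4 and idx < len(QUALITY_LEVELS) - 1:
        return QUALITY_LEVELS[idx + 1]
    return current

if __name__ == "__main__":
    level = "medium"
    for step in range(5):
        load = read_cpu_load()
        level = adapt(level, load)
        print(f"step {step}: load={load:.2f} -> quality={level}")
```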

Quantitative Analysis of Configurable and Reconfigurable Systems

Dubslaff, Clemens 21 March 2022 (has links)
The often huge configuration spaces of modern software systems render the detection, prediction, and explanation of defects and inadvertent behaviors challenging tasks. Besides configurability, a further source of complexity is the integration of cyber-physical systems (CPSs). Behaviors in CPSs depend on quantitative aspects such as throughput, energy consumption, and probability of failure, which all play a central role in new technologies like 5G networks, tactile internet, autonomous driving, and the internet of things. The manifold environmental influences and human interactions within CPSs might also trigger reconfigurations, e.g., to ensure quality of service through adaptivity or to fulfill users' wishes by adjusting program settings and performing software updates. Such reconfigurations add yet another source of complexity to the quest of modeling and analyzing modern software systems. The main contribution of this thesis is a formal compositional modeling and analysis framework for systems that involve configurability, adaptivity through reconfiguration, and quantitative aspects. Existing modeling approaches for configurable systems are commonly divided into annotative and compositional approaches, both having complementary strengths and weaknesses. It has been a well-known open problem in the configurable systems community whether there is a hybrid approach that combines the strengths of both specification approaches. We provide a formal solution to this problem, prove its correctness, and show practical applicability to actual configurable systems by introducing a formal analysis framework and its implementation. While existing family-based analysis approaches for configurable systems have mainly focused on software systems, we show the effectiveness of such approaches in the hardware domain as well. To explicate the impact of configuration options on analysis results, we introduce the notion of feature causality, which is inspired by the seminal counterfactual definition of causality by Halpern and Pearl. By means of several experimental studies, including a velocity controller of an aircraft system that required new techniques already for its analysis, we show how our notion of causality helps to identify root causes, estimate the effects of features, and detect feature interactions. Contents: 1 Introduction; 2 Foundations; 3 Probabilistic Configurable Systems; 4 Analysis and Synthesis in Reconfigurable Systems; 5 Experimental Studies; 6 Causality in Configurable Systems; 7 Conclusion
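The counterfactual intuition behind feature causality can be sketched with a naive brute-force check over configurations; the feature names and the defect predicate below are hypothetical, and the thesis's formal definition is considerably more refined.

```python
# Naive counterfactual check: a candidate feature set "causes" the defect if,
# in some configuration where the defect occurs, flipping exactly those
# features makes the defect disappear. Illustration only.
from itertools import product

FEATURES = ["encryption", "compression", "logging"]  # hypothetical features

def defect(config: dict[str, bool]) -> bool:
    """Hypothetical defect: the bug shows up only when both encryption
    and compression are enabled."""
    return config["encryption"] and config["compression"]

def is_counterfactual_cause(candidate: set[str]) -> bool:
    for values in product([False, True], repeat=len(FEATURES)):
        config = dict(zip(FEATURES, values))
        if not defect(config):
            continue
        flipped = {f: (not v if f in candidate else v) for f, v in config.items()}
        if not defect(flipped):
            return True
    return False

if __name__ == "__main__":
    print(is_counterfactual_cause({"compression"}))  # True
    print(is_counterfactual_cause({"logging"}))      # False
```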

Adaptive Performance: Arbeitsleistung im Kontext von Veränderungen / Adaptive Performance: Job Performance in the Context of Change

Beuing, Ulrike 11 December 2009 (has links)
This thesis examines the adaptive performance (AP) of individuals. AP is defined as behavior that responds to a changed work situation and is functional for achieving organizational goals. After a discussion of the definition and dimensionality of AP, it is distinguished from related research areas (e.g., flexibility, creativity, routines). An overview of existing paradigms and findings in AP research is then given. Since no instrument of good psychometric quality for measuring AP has been available so far, the first study (N = 216 performance ratings by supervisors) addresses the construction and validation of such an instrument. Consistent with the hypotheses, the two-dimensional division into social and task-oriented AP is confirmed. The second study (N = 225 self-ratings by employees) examines external correlates of AP. It shows that AP can be distinguished both from required (in-role) job performance and from personal initiative as proactive-innovative performance. Furthermore, positive relationships emerge with job satisfaction and learning goal orientation, and negative relationships with resistance to change and performance-avoidance goal orientation. The third study (N = 70 students) uses an experimental design, the task-change paradigm, to investigate AP. The results show a main effect of cognitive ability on AP as well as an interaction between goal orientation and cognitive ability: for individuals with high cognitive ability, learning goal orientation is beneficial to performance, whereas for individuals with low cognitive ability it is detrimental. Finally, the results and the methods used in the thesis are critically discussed, and future research directions and practical implications are addressed.

Rate-Adaptive Runlength Limited Encoding for High-Speed Infrared Communication

Funk, James Cyril 29 September 2005 (has links) (PDF)
My thesis demonstrates that Rate Adaptive Runlength Limited encoding (RA-RLL) achieves high data rates with acceptable error rates over a wide range of signal distortion, attenuation, and background noise. RA-RLL outperforms other infrared modulation schemes in terms of bandwidth efficiency, duty cycle control, and synchronization frequency. Rate-adaptive techniques allow for quick convergence of RA-RLL parameters to acceptable values. RA-RLL may be feasibly implemented on systems with non-ideal timing and digital synchronization.
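Runlength-limited codes such as RA-RLL constrain how many zeros may appear between consecutive ones; a minimal Python sketch of such a (d, k) constraint check is given below. The parameters and test sequences are assumptions for illustration, not the RA-RLL encoder developed in the thesis.

```python
# Check whether a binary sequence satisfies a (d, k) runlength constraint:
# between two 1s there must be at least d and at most k consecutive 0s.
def satisfies_dk(bits: list[int], d: int, k: int) -> bool:
    run = None  # number of 0s since the last 1 (None before the first 1)
    for bit in bits:
        if bit == 1:
            if run is not None and not (d <= run <= k):
                return False
            run = 0
        else:
            run = run + 1 if run is not None else None
            if run is not None and run > k:
                return False
    return True

if __name__ == "__main__":
    print(satisfies_dk([1, 0, 0, 1, 0, 0, 0, 1], d=2, k=4))  # True
    print(satisfies_dk([1, 1, 0, 1], d=1, k=3))              # False (adjacent 1s)
```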
