181

OFFLINE SCHEDULING OF TASK SETS WITH COMPLEX END-TO-END DELAY CONSTRAINTS

Holmberg, Jonas January 2017
Software systems in the automotive domain are generally safety-critical and subject to strict timing requirements. Systems of this character are often constructed from periodically executed tasks that have hard deadlines. In addition, these systems may have further deadlines specified on cause-effect chains, or simply task chains. The chains are defined over existing tasks in the system; hence they are not stand-alone additions to it. Each chain provides an end-to-end timing constraint targeting the propagation of data through the chain of tasks. These constraints specify the additional timing requirements that need to be fulfilled when searching for a valid schedule. In this thesis, an offline non-preemptive scheduling method is presented, designed for single-core systems. The scheduling problem is defined and formulated using Constraint Programming. In addition, to ensure that end-to-end timing requirements are met, job-level dependencies are considered during schedule generation. This approach guarantees that individual task periods along with end-to-end timing requirements are always met, if a schedule exists. The results show a good increase in schedulability ratio when utilizing job-level dependencies compared to the case where job-level dependencies are not specified; when system utilization increases, this improvement is even greater. Depending on the system size and complexity the improvement varies, but in many cases it is more than double. Schedule generation is also performed within a reasonable time frame, which is a benefit during the development process, since it allows fast verification when changes are made to the system. Further, the thesis provides an overview of the entire process, starting from a system model and ending at a fully functional schedule executing on a hardware platform.
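A minimal sketch (not from the thesis) of how such a constraint-programming formulation can look, using Google OR-Tools CP-SAT as the solver; the solver choice, the two-task set, and the specific job-level dependency are all illustrative assumptions:

```python
# Offline non-preemptive scheduling of periodic jobs with a job-level
# dependency, sketched with OR-Tools CP-SAT.
from ortools.sat.python import cp_model

HYPER = 12  # hyperperiod of the illustrative task set
tasks = {   # name: (period, wcet)
    "sensor": (4, 1),
    "filter": (6, 2),
}

model = cp_model.CpModel()
intervals, starts = [], {}
for name, (period, wcet) in tasks.items():
    for k in range(HYPER // period):
        # Each job must run non-preemptively inside its own period window.
        s = model.NewIntVar(k * period, (k + 1) * period - wcet, f"s_{name}_{k}")
        iv = model.NewIntervalVar(s, wcet, s + wcet, f"iv_{name}_{k}")
        starts[(name, k)] = s
        intervals.append(iv)

model.AddNoOverlap(intervals)  # single core: one job at a time

# Job-level dependency (assumed): job 0 of "filter" reads data produced by
# job 0 of "sensor", constraining the end-to-end data propagation.
model.Add(starts[("filter", 0)] >= starts[("sensor", 0)] + tasks["sensor"][1])

solver = cp_model.CpSolver()
if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    for (name, k), s in sorted(starts.items(), key=lambda x: solver.Value(x[1])):
        print(f"{name} job {k}: start={solver.Value(s)}")
```

Each job is an interval confined to its period window, AddNoOverlap enforces the single core, and the dependency constrains individual jobs rather than whole tasks, which is what makes end-to-end chain constraints expressible.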
182

Integrated provision of relative and absolute QoS in interactive computer services with real-time responsiveness requirements

Priscila Tiemi Maeda Saito 04 March 2010
Emerging computer system applications posing responsiveness requirements in the form of response time demand a real-time systems approach. In these systems, quality of service is expressed as guarantees on time constraints. A wide range of techniques for QoS provision is found in the literature. These techniques are based on either service differentiation (relative QoS) or specification of performance guarantees (absolute QoS). However, the integrated provision of both relative and absolute QoS at the application level is not as well explored. This work conducts the study, analysis and proposal of a real-time scheduling method in a simulated environment, based on adaptive virtual contracts and a feedback model. The goal is to relax the time constraints of less demanding users and prioritize those of the most demanding users, without degrading the quality of the system as a whole. Strategies toward this goal are exploited at the system scheduling level and are aimed at the problem of fulfilling service-level agreements specifying average response time requirements. The results achieved with the proposed method indicate an improvement in relative and absolute QoS and better user satisfaction. This work also proposes an extension to the models conventionally studied in this context, extending the original formulation from two classes to n classes of services.
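A minimal sketch (not from the thesis) of the kind of feedback step such a method performs; the class names, contract targets and gain are illustrative assumptions:

```python
# One feedback-control step adapting per-class scheduler weights so that
# measured average response times track per-class contracts.

contracts = {"gold": 0.2, "silver": 0.5, "bronze": 1.0}  # target avg resp. (s)
priority = {c: 1.0 for c in contracts}                   # scheduler weights

def update_priorities(measured_avg, gain=0.5):
    """Raise the weight of classes missing their contract and relax classes
    comfortably within it -- the 'virtual contract' adaptation idea."""
    for cls, target in contracts.items():
        error = (measured_avg[cls] - target) / target  # relative violation
        priority[cls] = max(0.1, priority[cls] * (1.0 + gain * error))

# Example control step with assumed measurements from the simulator:
update_priorities({"gold": 0.35, "silver": 0.45, "bronze": 0.9})
print(priority)  # gold is boosted; silver and bronze are slightly relaxed
```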
183

A simulation-based approach to test the performance of large-scale real time software systems

Waqas, Muhammad January 2020
Background: A real-time system operates with time constraints, and its correctness depends upon the time at which results are generated. Different industries use different types of real-time systems, such as telecommunication, air traffic control, power generation, and spacecraft systems. There is a category of real-time systems required to handle millions of users and operations at the same time; those systems are called large-scale real-time systems. In the telecommunication sector, many real-time systems are large scale, as they need to handle millions of users and resources in parallel. Performance is an essential aspect of this type of system; unpredictable behavior can cost telecom operators millions of dollars in a matter of seconds. The problem is that existing models for performance analysis of these types of systems are not cost-effective and require a lot of knowledge to deploy. In this context, we have developed a performance simulator tool based on XGBoost, Random Forest, and Decision Tree models. Objectives: The thesis aims to develop a cost-effective approach to support the analysis of the performance of large-scale real-time telecommunication systems. The idea is to develop and implement a solution to simulate the telecommunication system using some of the most promising identified factors that affect the performance of the system. Methods: We performed an improvement case study at Ericsson. Performance factors were identified through a dataset generated in a performance testing session, an investigation conducted on the same system, and unstructured interviews with system experts. The approach was selected through a literature review. Validation of the Performance Simulator was performed through static analysis and user feedback received from a questionnaire. Results: The results show that the Performance Simulator can be helpful for performance analysis of large-scale real-time telecommunication systems; evidence for its ability to support performance analysis of other real-time systems rests on a collection of expert opinions. Conclusions: The developed and validated approach demonstrates potential usefulness in performance analysis and can benefit significantly from further enhancements. The specific amount of data used for training might limit the generalization of the research to other real-time systems. In the future, this study can be consolidated with more inputs from large-scale real-time systems.
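A minimal sketch (not from the thesis) of training the three model types the thesis names to predict a performance metric from load factors; the synthetic data, feature meanings and target are assumptions:

```python
# Fit Decision Tree, Random Forest and XGBoost regressors on synthetic
# load-factor data and compare held-out accuracy.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(1000, 3))          # e.g. users, msg rate, CPU load
y = 5 * X[:, 0] + 2 * X[:, 1] ** 2 + rng.normal(0, 0.1, 1000)  # latency proxy

models = {
    "decision_tree": DecisionTreeRegressor(max_depth=6),
    "random_forest": RandomForestRegressor(n_estimators=100),
    "xgboost": XGBRegressor(n_estimators=100, max_depth=4),
}
for name, m in models.items():
    m.fit(X[:800], y[:800])
    print(name, "R^2 on held-out data:", round(m.score(X[800:], y[800:]), 3))
```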
184

Deployment of mixed criticality and data driven systems on multi-cores architectures

Medina, Roberto 30 January 2019
Nowadays, the design of modern safety-critical systems is pushing towards the integration of multiple system components onto a single shared computation platform. Mixed-criticality systems in particular allow critical components with a high degree of confidence (i.e. low probability of failure) to share computation resources with less/non-critical components without requiring software isolation mechanisms (as opposed to partitioned systems). Traditionally, safety-critical systems have been conceived using models of computation like data-flow graphs and real-time scheduling to obtain logical and temporal correctness. Nonetheless, resources given to data-flow representations and real-time scheduling techniques are based on worst-case analysis, which often leads to under-utilization of the computation capacity: the allocated resources are not always completely used. This under-utilization becomes more pronounced on multi-core architectures, where the difference between best- and worst-case performance is more significant. The mixed-criticality execution model proposes a solution to this problem. To efficiently allocate resources while ensuring safe execution of the most critical components, resources are allocated as a function of the operational mode the system is in. As long as sufficient processing capacity is available to meet all deadlines, the system remains in a 'low-criticality' operational mode. If the system demand increases, critical components are prioritized to meet their deadlines, their computation resources are increased, and less/non-critical components are potentially penalized. The system is then said to transition to a 'high-criticality' operational mode. Yet, the incorporation of mixed-criticality aspects into the data-flow model of computation is a very difficult problem, as it requires new scheduling methods capable of handling precedence constraints and variations in timing budgets. Although mixed-criticality scheduling has been well studied for single- and multi-core platforms, the problem of data dependencies on multi-core platforms has rarely been considered. Existing methods lead to poor resource usage, which contradicts the main purpose of mixed criticality. For this reason, our first objective is to design new efficient scheduling methods for data-driven mixed-criticality systems. We define a meta-heuristic producing scheduling tables for all operational modes of the system. These tables are proven correct: when the system demand increases, critical components will never miss a deadline. Two implementations based on existing preemptive global algorithms were developed to gain in schedulability and resource usage; in some cases these implementations schedule more than 60% of systems compared to existing approaches. While the mixed-criticality model claims that critical and non-critical components can share the same computation platform, the interruption of non-critical components degrades their availability significantly. This is a problem since non-critical components need to deliver a minimum service guarantee; recent works in mixed criticality have recognized this limitation. For this reason, we define methods to evaluate the availability of non-critical components. To our knowledge, our evaluations are the first capable of quantifying availability. We also propose enhancements, compatible with our scheduling methods, that limit the impact critical components have on non-critical ones. These enhancements are evaluated with probabilistic automata and show a considerable improvement in availability, e.g. improvements of over 2% in a context where 10⁻⁹ increases are significant. Our contributions have been integrated into an open-source framework. This tool also provides an unbiased generator used to perform evaluations of scheduling methods for data-driven mixed-criticality systems.
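A minimal sketch (not from the thesis) of dispatching from per-mode scheduling tables with a low-to-high criticality mode switch; the tables, budgets and overrun trigger are illustrative assumptions:

```python
# Per-mode scheduling tables: the HI table drops the non-critical "log"
# task and gives the critical "ctrl" task a larger budget.
TABLES = {
    "LO": [("ctrl", 0, 2), ("log", 2, 2), ("ctrl", 4, 2)],  # (task, start, budget)
    "HI": [("ctrl", 0, 3), ("ctrl", 3, 3)],
}

def run_frame(exec_time, mode="LO"):
    """Dispatch one frame; switch to HI if a job exceeds its LO budget."""
    for task, start, budget in TABLES[mode]:
        actual = exec_time.get(task, budget)
        if actual > budget and mode == "LO":
            print(f"{task} overran its LO budget -> switching tables")
            return run_frame(exec_time, mode="HI")   # restart frame on HI table
        print(f"[{mode}] t={start}: run {task} for {min(actual, budget)}")
    return mode

run_frame({"ctrl": 3, "log": 2})   # overrun: system degrades to HI mode
```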
185

Adaptive Middleware for Self-Configurable Embedded Real-Time Systems : Experiences from the DySCAS Project and Remaining Challenges

Persson, Magnus January 2009
Development of software for embedded real-time systems poses several challenges. Hard and soft constraints on timing, and usually considerable resource limitations, put important constraints on the development. The traditional way of coping with these issues is to produce a fully static design, i.e. one that is fully fixed already during design time. Current trends in the area of embedded systems, including the emerging openness in these types of systems, are providing new challenges for their designers, e.g. integration of new software during runtime, software upgrade, or run-time adaptation of application behavior to facilitate better performance combined with more efficient resource usage. One way to reach these goals is to build self-configurable systems, i.e. systems that can resolve such issues without human intervention. Such mechanisms may be used to promote increased system openness. This thesis covers some of the challenges involved in that development. An overview of the current situation is given, with an extensive review of different concepts that are applicable to the problem, including adaptivity mechanisms (including QoS and load balancing), middleware, and relevant design approaches (component-based, model-based and architectural design). A middleware is a software layer that can be used in distributed systems, with the purpose of abstracting away distribution, and possibly other aspects, for the application developers. The DySCAS project had as a major goal the development of middleware for self-configurable systems in the automotive sector. Such development is complicated by the special requirements that apply to these platforms. Work on the implementation of an adaptive middleware, DyLite, providing self-configurability to small-scale microcontrollers, is described and covered in detail. DyLite is a partial implementation of the concepts developed in DySCAS. Another area given significant focus is formal modeling of QoS and resource management. Currently, applications in these types of systems are not given a fully formal definition, at least not one also covering real-time aspects. Using formal modeling would extend the possibilities for verification of not only system functionality, but also of resource usage, timing and other extra-functional requirements. This thesis includes a proposal of a formalism to be used for these purposes. Several challenges in providing methodology and tools that are usable in production development still remain. Several key issues in this area are described, e.g. version/configuration management, access control, and integration between different tools, together with proposals for future work in the other areas covered by the thesis.
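A minimal sketch (not from the thesis) of the kind of self-configuration decision such middleware automates, here a load-triggered task migration; the nodes, loads and threshold are illustrative assumptions:

```python
# Self-configuration without human intervention: migrate a task away from
# an overloaded node to the least-loaded one.
nodes = {"ecu_a": {"load": 0.92, "tasks": ["video", "diag"]},
         "ecu_b": {"load": 0.30, "tasks": ["hvac"]}}

def reconfigure(threshold=0.85):
    """Move one task from an overloaded node to the least-loaded node."""
    for name, node in nodes.items():
        if node["load"] > threshold and node["tasks"]:
            target = min(nodes, key=lambda n: nodes[n]["load"])
            if target != name:
                task = node["tasks"].pop()          # pick a migratable task
                nodes[target]["tasks"].append(task)
                print(f"migrated {task}: {name} -> {target}")

reconfigure()
```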
186

Verification of real time properties in Fiacre language

Abid, Nouha 11 December 2012
The formal verification of critical, reactive systems is a very complicated task, especially for non-experts. In this work, we more particularly address the problem of real-time systems, that is, the situation where the correctness of the system depends upon timing constraints, such as the "timeliness" of some interactions. Many solutions have been proposed to ease the specification and the verification of such systems. An interesting approach, which we follow in this thesis, is based on the definition of specification patterns, that is, sets of general, reusable templates for commonly occurring classes of properties. However, patterns are rarely implemented, in the sense that the designers of specification languages rarely provide an effective verification method for checking a pattern on a system. The most common technique is to rely on a timed extension of a temporal logic to define the semantics of patterns and then to use a model checker for this logic. However, this approach may be inadequate, in particular if patterns require the use of a logic associated with an undecidable model-checking problem or with an algorithm of very high practical complexity. We make several contributions. We propose a complete theoretical framework to specify and check real-time properties on the formal model of a system. First, our framework provides a set of real-time specification patterns. We then provide a verification technique based on the use of observers, which has been implemented in a tool for the Fiacre modelling language. Finally, we provide two methods to check the correctness of our verification approach: a "semantics" (theoretical) method as well as a "graphical" (practical) method.
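A minimal sketch (not from the thesis) of an observer for one common real-time pattern, bounded response ("every request is answered within D time units"), checked over a timed event trace; the event names and bound are assumptions:

```python
# Observer for the bounded-response pattern over a time-ordered trace.
def leadsto_within(trace, trigger, response, deadline):
    """trace: list of (time, event). True iff every trigger is answered by
    a response at most `deadline` time units later."""
    pending = []                          # times of unanswered triggers
    for t, event in trace:
        if pending and t - pending[0] > deadline:
            return False                  # oldest trigger missed its deadline
        if event == trigger:
            pending.append(t)
        elif event == response and pending:
            pending.pop(0)                # answers the oldest pending trigger
    # remaining triggers are fine only if their deadline has not yet elapsed
    end = trace[-1][0] if trace else 0
    return all(end - t0 <= deadline for t0 in pending)

trace = [(0, "req"), (3, "grant"), (5, "req"), (12, "grant")]
print(leadsto_within(trace, "req", "grant", 5))   # False: second grant is late
```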
187

Real-time Code Generation in Virtualizing Runtime Environments

Däumler, Martin 03 March 2015
Modern general-purpose programming languages like Java or C# provide a rich feature set and a higher degree of abstraction than conventional real-time programming languages like C/C++ or Ada. Applications developed with these modern languages are typically deployed via platform-independent intermediate code, which is executed by a virtualizing runtime environment. This allows for high portability. Prominent examples are the Dalvik Virtual Machine of the Android operating system, the Java Virtual Machine, and Microsoft .NET's Common Language Runtime. The virtualizing runtime environment executes the instructions of the intermediate code, which introduces additional challenges to real-time software development. One issue is the transformation of intermediate code instructions to native code instructions. If this transformation interferes with the execution of the real-time application, it can introduce jitter to execution times. This can degrade the quality of soft real-time systems like augmented reality applications on mobile devices, and can lead to severe problems in hard real-time applications that have strict timing requirements. This thesis examines the possibility of overcoming timing issues with intermediate code execution in virtualizing runtime environments. It addresses real-time suitable generation of native code from intermediate code in particular. In order to preserve the advantages of modern programming languages over conventional ones, the solution has to adhere to the following main requirements: intermediate code transformation must not interfere with application execution; portability must not be reduced, and code transformation must remain transparent to the programmer; and performance must be comparable. Existing approaches are evaluated, and a concept for real-time suitable code generation is developed. The concept is based on pre-allocation of the native code and the elimination of indirect references, while considering and optimizing the startup time of an application. The concept is implemented by extending an existing virtualizing runtime environment that does not target real-time systems per se, and it is evaluated qualitatively and quantitatively. A comparison of the new concept to existing approaches reveals high execution-time determinism and good performance, while preserving the portable deployment of applications via intermediate code.
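A minimal sketch (not from the thesis) illustrating why on-demand code transformation causes jitter, using a lazy versus pre-filled dispatch cache as a stand-in for JIT versus pre-allocated native code; the simulated compile cost is an assumption:

```python
# First-call latency of a lazy (JIT-like) runtime vs. a preloaded (AOT-like)
# runtime, with a simulated code-generation cost.
import time

def compile_method(name):
    time.sleep(0.005)                 # simulated code-generation latency
    return lambda: f"native:{name}"

class LazyRuntime:                    # transforms on first call (JIT-like)
    def __init__(self, names):
        self.cache, self.names = {}, names
    def call(self, name):
        if name not in self.cache:    # first call pays the compilation cost
            self.cache[name] = compile_method(name)
        return self.cache[name]()

class PreloadedRuntime(LazyRuntime):  # transforms everything at startup (AOT-like)
    def __init__(self, names):
        super().__init__(names)
        self.cache = {n: compile_method(n) for n in names}

for runtime in (LazyRuntime(["task"]), PreloadedRuntime(["task"])):
    t0 = time.perf_counter(); runtime.call("task")
    t1 = time.perf_counter(); runtime.call("task")
    t2 = time.perf_counter()
    print(type(runtime).__name__,
          f"first={1e3*(t1-t0):.2f} ms, second={1e3*(t2-t1):.2f} ms")
```

The lazy runtime shows a large first-call spike that the preloaded runtime avoids, at the price of longer startup, which is exactly the trade-off the concept optimizes.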
188

Model-Level Timing Analysis for UML-RT Capsules

Ståhlbom, Niclas January 2023
Real-time systems surround every facet of our lives. They can be found in anything from everyday objects like mobile phones and washing machines to objects critical to life and infrastructure, including heart rate monitors and nuclear power plants. As time progresses these systems are becoming ever more complex. To cope with the increase in complexity, developers and researchers are turning to model-driven development as a solution. One modeling language aimed specifically at real-time systems is UML-RT. This thesis proposes an algorithm that, for a significant subset of UML-RT, is able to provide a worst-case execution time analysis for capsules at the model level. Having access to worst-case execution times allows developers to reason about a given system at an early stage. This allows for better resource allocation as well as the ability to perform scheduling analysis. Development of the algorithm was performed iteratively using the constructive research approach. We began by first gaining an understanding of the theory. We then successively developed a theoretical algorithm, selecting one or a few UML-RT entities at a time; with each iteration the algorithm was redefined to incorporate the new entities. At the end of the development, we created an implementation of the algorithm as an Eclipse Modeling Framework plug-in using Java. We then created a set of hard-coded capsule models which were used to validate the algorithm.
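A minimal sketch (not from the thesis) of a model-level WCET computation in this spirit: take the costliest path through a capsule-like state machine whose transitions carry worst-case action times; the chart, costs and acyclicity restriction are assumptions:

```python
# Worst-case time from a start state to a terminal state of an acyclic
# transition graph, via memoized longest-path search.
from functools import lru_cache

# transitions: state -> list of (next_state, worst_case_action_time)
transitions = {
    "idle":  [("recv", 3)],
    "recv":  [("check", 2), ("reply", 6)],
    "check": [("reply", 4)],
    "reply": [],                     # terminal state
}

@lru_cache(maxsize=None)
def wcet(state):
    """Worst-case time to reach a terminal state (assumes an acyclic chart;
    cycles would need loop bounds, which this sketch leaves out)."""
    outgoing = transitions[state]
    if not outgoing:
        return 0
    return max(cost + wcet(nxt) for nxt, cost in outgoing)

print("capsule WCET from 'idle':", wcet("idle"))   # 3 + max(2+4, 6) = 9
```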
189

Medium Access Control and Networking Protocols for the Intra-Body Network

Stucki, Eric Thomas 05 March 2006
Biomedical applications offer an exciting growth opportunity for wireless sensor networks. However, radio frequency communication is problematic in hospital environments that are susceptible to interference in the industrial, scientific, and medical (ISM) bands. Also, RF is inherently insecure, as eavesdroppers can easily pick up signals. The Intra-Body Network (IBNet) proposes a novel communication model for biomedical sensor networks. It seeks the convenience of wireless communication while avoiding the interference and privacy concerns associated with RF. IBNet's solution is to utilize a subject's own body tissue as a transmission medium. Assuming that transmissions are contained within the body, IBNet solves otherwise complex problems of privacy and interference. Unfortunately, transmitting through the same medium in which we sense creates a new type of conflict: it is possible that one sensor's network transmission might corrupt an adjacent sensor's sample data. We present Body Language, a set of protocols that arbitrate IBNet's sampling/communication conflict while providing basic services such as dynamic node discovery, network configuration, quality of service, and sensor sample collection. Body Language seeks to provide these services and solve IBNet's unique communication challenges while minimizing hardware resource requirements and hence sensor node cost. In order to prove the feasibility of Body Language, we created an IBNet prototype environment where the protocols were demonstrated on real hardware and in real time. The prototype also offers important insight into Body Language's computational resource requirements. Our results show that Body Language provides all services required by IBNet, and it does so with a very modest footprint.
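A minimal sketch (not from the thesis) of one way to arbitrate the sampling/communication conflict: a slot schedule that reserves quiet slots for sensing and gives each node a dedicated transmit slot elsewhere; the frame length, slot assignments and node count are assumptions:

```python
# TDMA-like frame in which no node transmits during sampling slots, so a
# transmission can never corrupt an adjacent sensor's sample.
FRAME = 10                       # slots per frame
SAMPLE_SLOTS = {0, 5}            # reserved for sensing: all nodes stay quiet

def tx_slot(node_id):
    """Give each node a dedicated transmit slot, skipping sampling slots."""
    free = [s for s in range(FRAME) if s not in SAMPLE_SLOTS]
    return free[node_id % len(free)]

for slot in range(FRAME):
    if slot in SAMPLE_SLOTS:
        print(f"slot {slot}: SAMPLE (no transmissions)")
    else:
        senders = [n for n in range(3) if tx_slot(n) == slot]
        print(f"slot {slot}: " +
              (f"node {senders[0]} transmits" if senders else "idle"))
```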
190

Design and development of a next-generation Internet of Things architecture oriented to the computation and prediction of composite indices, applied in real environments

Lacalle Úbeda, Ignacio 12 December 2022
The Internet of Things (IoT) has experienced tremendous growth in recent years. The increase in the number of devices, greater miniaturization of computing capacity, and virtualization techniques have favored its adoption in industry and other sectors. Likewise, the introduction of new technologies (such as Artificial Intelligence, 5G, the Tactile Internet or Augmented Reality), together with the rise of edge computing, is paving the way, and formulating the requirements, for what is known as the Next Generation Internet of Things (NGIoT). These advances pose new challenges, such as the establishment of architectures that meet those needs and, at the same time, are flexible, scalable, and practical for implementing services that bring value to society. In this sense, IoT can be a key element for policy making and decision making. A very useful tool for this is the definition and calculation of composite indicators, which represent an impact on a real phenomenon through a single value. The generation of these indicators is promoted by official entities such as the European Union, although their automation and use in real-time environments is a rather uncharted research field. This type of index must follow a series of mathematical operations and formalities (normalization, weighting, aggregation, and so on) to be considered valid. This doctoral thesis proposes the union of these two growing fields, presenting a next-generation Internet of Things architecture oriented to the calculation and prediction of composite indicators. Based on the candidate's experience in European and regional research projects, and building on open-source technologies, the design, development and integration of the modules of this architecture (data acquisition, processing, visualization and security) are included as part of the thesis. These approaches and implementations have been validated in five different scenarios, covering five composite indices in environments with disparate requirements, following a methodology designed during this work. The use cases focus on sustainability aspects in urban and maritime-port environments, but the solution can be extrapolated to other sectors, as it has been designed in an agnostic way. The result of the thesis has also been analyzed from the point of view of technology transfer: a tentative product definition has been formulated, as well as possible financing in more advanced stages of maturity and its potential exploitation as a marketable element.
Lacalle Úbeda, I. (2022). Diseño y desarrollo de una arquitectura de Internet de las Cosas de Nueva Generación orientada al cálculo y predicción de índices compuestos aplicada en entornos reales [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/190634
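A minimal sketch (not from the thesis) of the normalization, weighting and aggregation steps a composite indicator must follow, using min-max normalization and a weighted arithmetic mean (one common choice); the sub-indicators, bounds and weights are assumptions:

```python
# Composite index pipeline: normalize -> weight -> aggregate.
import numpy as np

def composite_index(raw, bounds, weights):
    """Min-max normalize each sub-indicator to [0, 1], then aggregate with
    a weighted arithmetic mean."""
    lo, hi = bounds[:, 0], bounds[:, 1]
    norm = np.clip((raw - lo) / (hi - lo), 0.0, 1.0)   # normalization
    w = np.asarray(weights) / np.sum(weights)          # weighting (sums to 1)
    return float(norm @ w)                             # aggregation

# e.g. an urban sustainability index from three sensed sub-indicators
# (polarity alignment of "higher is worse" indicators is omitted for brevity)
raw = np.array([42.0, 0.8, 350.0])        # noise dB, green ratio, CO2 ppm
bounds = np.array([[30, 90], [0, 1], [300, 600]])
print(composite_index(raw, bounds, weights=[0.3, 0.3, 0.4]))
```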
