111

Improving the performance of GPU-accelerated spatial joins

Hrstic, Dusan Viktor January 2017 (has links)
Data collisions have been widely studied by various fields of science and industry. Combining the CPU and GPU for processing spatial joins has been broadly accepted due to the increased speed of computations. This should redirect efforts in GPGPU research from straightforward porting of applications to establishing principles and strategies that allow efficient mapping of computation to graphics hardware. As threads execute instructions while using the hardware resources available to them, the impact of different thread organizations on spatial join performance is analyzed and examined in this report. New perspectives and solutions to the problem of thread organization and warp scheduling may encourage more developers to program on the GPU side. The aim of this project is to examine the impact of different thread organizations in spatial join processing. The relationship between the items inside the datasets is examined by counting the number of collisions their join produces, in order to understand how different approaches may influence performance. Performance benchmarking, analysis, and measurement of different approaches to thread organization are investigated in this report in order to find the most time-efficient solution, which is the purpose of the conducted work. The report presents the results obtained for different thread techniques used to optimize the computational speed of the spatial join algorithms. There are two algorithms on the GPU: one implementing the thread techniques and the other a non-optimized solution. The GPU times are compared with the execution times on the CPU, and the GPU implementations are verified by checking that their collision counters match all of the collision counters from the CPU counterpart. In the analysis part of this report the implementations are discussed and compared to each other. The difference between the algorithm implementing the thread techniques and the non-optimized one lies around 80% in favour of the algorithm implementing the thread techniques, and it is also around 56 times faster than the spatial joins on the CPU.
/ Data collisions have been studied extensively in various areas of science and industry. Combining the CPU and GPU for processing spatial joins has been accepted because of the better performance. This should redirect efforts in GPGPU research from simple porting of applications to establishing principles and strategies that enable efficient use of graphics hardware. Since threads executing instructions make use of hardware resources, different thread organizations have different effects. Their impact on the performance of spatial joins is analyzed and examined in this report. New perspectives and solutions to the problem of thread organization and warp scheduling may encourage more people to take up GPU programming. The purpose of this report is to examine the effects of different thread organizations in spatial joins. The relationship between the objects in the datasets is examined by computing the number of collisions that the joined datasets cause; this is done in order to understand how different methods can affect efficiency and performance. Performance measurements of different thread-organization methods are examined and analyzed to find the most time-efficient solution. The report also visualizes the results obtained with the different thread techniques used to optimize the computational speeds of the spatial joins. The report examines one CPU algorithm and two GPU algorithms. The GPU times are continuously compared with the execution times on the CPU, and the GPU implementations are verified by comparing the collision counts from the CPU and the GPU. In the analysis part of the report, the different implementations are compared and discussed. The difference between an algorithm implementing thread techniques and a non-optimized version turned out to be about 80% in favour of the algorithm implementing thread techniques, which is also around 56 times faster than the spatial joins on the CPU.
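To make the collision-counting formulation concrete, here is a minimal sketch of the CPU reference computation described above, assuming axis-aligned bounding boxes given as (x_min, y_min, x_max, y_max); the data format and function names are illustrative assumptions, not taken from the thesis, whose contribution is how this work is mapped onto GPU threads.

```python
# Minimal CPU reference for a spatial join by collision counting.
# Rectangles are (x_min, y_min, x_max, y_max); the GPU versions in the
# thesis parallelize this nested loop with different thread organizations.

def overlaps(a, b):
    """Axis-aligned bounding boxes collide if they overlap on both axes."""
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def count_collisions(dataset_a, dataset_b):
    """Brute-force spatial join: count colliding pairs (used to verify GPU results)."""
    collisions = 0
    for ra in dataset_a:
        for rb in dataset_b:
            if overlaps(ra, rb):
                collisions += 1
    return collisions

if __name__ == "__main__":
    a = [(0, 0, 2, 2), (5, 5, 6, 6)]
    b = [(1, 1, 3, 3), (7, 7, 8, 8)]
    print(count_collisions(a, b))  # -> 1
```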
112

Crimean Rhetorical Sovereignty: Resisting A Deportation Of Identity

Berry, Christian 01 January 2013 (has links)
On a small contested part of the world, the peninsula of Crimea, once a part of the former Soviet Union, lives a people who have endured genocide and who have struggled to carve out an identity in a land once their own. They are the Crimean Tatar. Even their name, an exonym promoting the Crimeans’ “peripheral status” (Powell) and their ensuing “cultural schizophrenia” (Vizenor), bears witness to the otherization they have withstood throughout the centuries. However, despite attempts to relegate them to the history books, Crimeans are alive and well in the “motherland,” but not without some difficulty. Having been forced to reframe their identities because of numerous imperialist, colonialist, and Soviet behaviors and policies, many have resisted, first and foremost through rhetorical sovereignty: the ability to reframe Crimean Tatar identity through Crimean Tatar rhetoric. This negotiation of identity through rhetoric has included a fierce defense of their language and culture in what Malea Powell calls a “war with homogeneity,” a struggle for identification based on resistance. This thesis seeks to understand the rhetorical function of naming practices as acts that inscribe material meaning and perform marginalization or resistance within the context of Crimea-L, a Yahoo! Groups listserv, as well as immediate and remote Crimean history. To analyze the rhetoric of marginalization and resistance in naming practices, I use the Discourse Historical Approach (DHA) to Critical Discourse Analysis (CDA) within recently archived discourses. Ruth Wodak’s DHA strategies will be reappropriated as Naming Practice Strategies, depicting efforts in otherization or rhetorical sovereignty.
113

Human-centric process planning for Plug & Produce : Digital threads connecting product design with automated manufacturing

Nilsson, Anders January 2023 (has links)
Adapting to a fluctuating market and intensified customer demands for unique products is a challenge for manufacturers. Manual manufacturing is still the most flexible; nevertheless, automation ensures stable quality, minimizes wear and tear on the operators, and contributes to a safer and better working environment, since the distance between the operator and the process can be increased and screened off. Hence, the manufacturing industry is searching for human-centric automation solutions that are flexible enough to handle these challenges. Conventional automation is tailored to one or a few similar product variants; in addition, increased flexibility implies increased complexity to handle. This licentiate thesis demonstrates a flexible Plug & Produce automated manufacturing concept where the complexity is redirected to focus on the products and manufacturing processes by utilizing artificial intelligence, together with digital threads that connect the product design to automated manufacturing, enabling manufacturing companies to manage new production scenarios with their in-house knowledge. Data is picked directly from the computer-based design of the products, and process knowledge that normally exists within the manufacturing company is added through graphical user interfaces. The graphical configuration tools visualize the flow of sequential and parallel manufacturing operations together with process-bound information. Plug & Produce relies on pluggable process modules with reusable manufacturing resources that can be plugged in and out as needed. As an example, a module with a robot can be plugged in to assist an existing robot and thereby balance the production capacity. In Plug & Produce, resources start working and cooperating with other resources automatically when they are plugged in. To achieve this, the resources are provided with distributed artificial intelligence, together with intelligent products that know how they are to be finalized. In this concept, everything is digitally configurable using the in-house knowledge of the manufacturing companies. A Plug & Produce test bed was built to verify the concept in cooperation with industrial representatives. / This licentiate thesis demonstrates a concept for increasing flexibility while redirecting the complexity of automated production systems at manufacturing companies, in such a way that their own staff can retool production towards new products on their own. Adapting to market fluctuations and the demand for new, unique products is an ongoing process. More and more production is being moved back to Sweden and the rest of Europe, which increases the demand for flexible and reconfigurable automation. Automation keeps prices down where labour is expensive, ensures consistent quality, minimizes repetitive strain injuries among employees, and contributes to a safer and more pleasant working environment, since the distance between operator and process can be increased and screened off. Production moved to the home market from low-wage countries often replaces highly flexible and adaptable manual manufacturing, which is a major challenge for industry. A Plug & Produce concept for automated manufacturing is developed and described in this thesis, in which the automation can easily be reconfigured by in-house staff and adapted to new products. Reconfiguration by the company's own staff is made possible by extracting as much information as possible from the product's computer-based design.
Process knowledge normally held within the manufacturing company is added through graphical user interfaces that show the flow of manufacturing operations together with process-specific data such as dimensions, machining speeds, temperatures and colour. Plug & Produce systems are built around process modules with manufacturing resources that can be plugged in and out as needed. For example, a module with a robot can be plugged in to relieve an existing robot and thereby increase production speed. Specially designed resources can be plugged in to increase efficiency and minimize energy consumption. For a plugged-in process module to start working and cooperating with the other modules on its own, it is provided with its own local artificial intelligence. Thanks to this intelligence, the process modules can be plugged into different Plug & Produce systems and are thus reusable in new systems. The intelligence can be located locally in a computer on the resource or in a computing cloud connected to the resource. In the same way, the products can be provided with intelligence and are then called smart products. These products have the goal of becoming fully produced through sub-goals in the form of manufacturing operations. This intelligence is supplied with knowledge and experience by the staff of the manufacturing company through user-friendly interfaces. A Plug & Produce test bed has been built together with representatives of the prefabricated wooden house industry. The manufacturing of prefabricated wooden houses is today largely manual, since existing automation solutions are not flexible enough given that the houses are highly customized. The work described in this thesis benefits the wooden house industry and thereby the climate, since wood binds carbon for a long time to come. / Paper A is not included in the digital licentiate thesis due to copyright.
114

MIGRATING FROM A VAX/VMS TO AN INTEL/WINDOWS-NT BASED GROUND STATION

Penna, Sergio D., Rios, Domingos B. 10 1900 (has links)
International Telemetering Conference Proceedings / October 25-28, 1999 / Riviera Hotel and Convention Center, Las Vegas, Nevada / Upgrading or replacing production systems is always a very resource-consuming task, in particular if the systems being replaced are quite specialized, such as those serving any Flight Test Ground Station. In the recent past a large number of Ground Station systems were based on Digital’s VAX/VMS architecture. The computer industry then expanded very quickly, and by 1990 real-time PCM data processing systems totally dependent on hardware and software designed for IBM-PC compatible micro-computers were becoming available. A complete system replacement in a typical Ground Station can take from one to several years to become a reality. It depends on how complex the original system is, how complex the resulting system needs to be, how much resource is available to support the operation, how soon the organization needs it, etc. This paper reviews the main concerns encountered during the replacement of a typical VAX/VMS-based Ground Station by an Intel/Windows NT-based one. It covers the transition from the original requirements to totally new requirements, from mini-computers to micro-computers, and from DMA to high-speed LAN data transfers, while preserving some key architectural features. This 8-month development effort will expand EMBRAER’s capability in acquiring, processing and archiving PCM data in the next few years at a lower cost, while preserving compatibility with old legacy flight test data.
115

Exekveringsmiljö för Plex-C på JVM / Run-time environment for Plex-C on JVM

Möller, Johan January 2002 (has links)
The Ericsson AXE-based systems are programmed using an internally developed language called Plex-C. Plex-C is normally compiled to execute on an Ericsson internal processor architecture. A transition to standard processors is currently in progress. This makes it interesting to examine whether Plex-C can be compiled to execute on the JVM, which would make it processor independent. The purpose of the thesis is to examine whether parts of the run-time environment of Plex-C can be translated to Java and whether this can be done with sufficient performance. This includes how language constructs in Plex-C can be translated to Java. The thesis describes how a limited part of the Plex-C run-time environment is implemented in Java. Optimizations are an important part of the implementation. It also describes how the JVM system was tested with a benchmark. The test results indicate that the implemented system is a few times faster than the Ericsson internal processor architecture. However, this performance is still not sufficient for the JVM system to be an interesting replacement for the currently used processor architecture. It might still be useful as a processor-independent test platform.
116

Measuring member contribution impact in an online community / Mesure de l'impact de la contribution des membres d’une communauté en ligne

Takeda, Hirotoshi 29 September 2015 (has links)
The online community (OC) is a very widespread form of specialized knowledge transfer, where geographically dispersed users can form a community by sharing ideas, sending and posting messages, debating topics, and forging online friendships. One of the problems with OCs is their sustainability: their emergence and initial growth are followed by a stagnation phase in which users stop posting comments, which leads the community to die from lack of activity. Trying to extend the dynamic growth phase of an OC is a relevant topic for any OC administrator. One way to keep an OC vibrant is to encourage contributions. This research stream examines how OCs can extend their dynamic phase by considering several aspects, in particular measures of user contributions and how new users of an OC behave. I propose to use different measures to evaluate users' contributions. One measure for identifying highly active contributors is a non-invasive bibliometric measure based on the Hirsch index. Another aspect of this research concerns how new users behave and how this can be explained by preferential attachment. / The online community (OC) is a popular form of specialized knowledge transfer, where geographically dispersed users can form a community by sharing ideas, sending and posting messages, debating topics, and forging online friendships. One of the problems with OCs is that they tend to have a life cycle: there is the birth and growth of the OC, but then there is a stagnant stage where users stop posting to the OC and the community eventually dies due to inactivity. Trying to extend the vibrant growth stage of an OC is a relevant topic for any administrator of an OC. One way that an OC can stay vibrant is to encourage contributions. This research stream looks at how OCs can keep their vibrancy for a longer period of time, by looking at various aspects of OCs such as measures of user contribution and how new users in an OC behave. I propose to use different measures to evaluate users' contributions to an OC. One of these measures is a non-invasive bibliometric measure using the Hirsch-index methodology as a way to identify high-level contributors. Another stream of this research will look at how new users behave and how this might be explained by preferential attachment.
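As a rough sketch of how a Hirsch-index-style measure can flag high-level contributors, the snippet below computes h from per-post reply counts; treating replies as the "citation" signal for a post is an assumption made here for illustration, not necessarily the dissertation's exact operationalization.

```python
# Hirsch-index over a member's posts: h is the largest number such that
# the member has at least h posts with at least h replies each.
# Using reply counts as the "citation" signal is an assumption for illustration.

def h_index(replies_per_post):
    counts = sorted(replies_per_post, reverse=True)
    h = 0
    for rank, replies in enumerate(counts, start=1):
        if replies >= rank:
            h = rank
        else:
            break
    return h

if __name__ == "__main__":
    print(h_index([10, 4, 3, 1, 0]))  # -> 3: three posts drew at least 3 replies each
```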
117

High Performance by Exploiting Information Locality through Reverse Computing / Hautes Performances en Exploitant la Localité de l'Information via le Calcul Réversible.

Bahi, Mouad 21 December 2011 (has links)
The three main resources of computation are time, space, and energy; minimizing them is one of the most important challenges in the pursuit of processor performance. In this thesis, we are interested in a fourth factor, which is information. Information has a direct impact on these three factors, and we show how it thereby contributes to performance optimization. Landauer showed that it is the (logical) destruction of information that costs energy; this is a fundamental result of thermodynamics in physics. Under this hypothesis, a computation that consumes no energy is therefore a computation that destroys no information. The original and intermediate values can always be recovered at any point of the computation: the computation is reversible. Information may be carried not only by a value but also by the process and the input data that generate it. When a computation is reversible, a piece of information can also be recovered from already computed data and the inverse computation. Hence, reversible computing improves the locality of information. The thesis develops these ideas in two directions. In the first part, starting from a computation given as a DAG (directed acyclic graph), we define the notion of "garbage" as the additional memory size, i.e. the number of extra registers, needed to make this computation reversible. We propose a reversible register allocator, and we show empirically that the garbage is at most half the number of nodes of the graph. The second part applies this approach to the trade-off between recomputation (direct or reverse) and storage in the context of supercomputing hardware such as recent vector and parallel coprocessors, graphics cards (GPUs), the IBM Cell processor, etc., where the gap between memory access time and computation time keeps widening. We show how recomputation in general, and reverse recomputation in particular, minimizes register demand and hence memory pressure. This approach also significantly increases instruction-level parallelism (Cell BE) and thread-level parallelism on a multicore with shared memory and/or a shared register file (GPU), where the number of threads depends strongly on the number of registers used by each thread. Thus, the additional instructions introduced by reverse computation for the rematerialization of certain variables are largely compensated by the gain in parallelism. Our experiments on Lattice QCD code ported to an Nvidia GPU show a performance gain of up to 11%. / The main resources for computation are time, space and energy. Reducing them is the main challenge in the field of processor performance. In this thesis, we are interested in a fourth factor, which is information. Information has an important and direct impact on these three resources. We show how it contributes to performance optimization. Landauer has suggested that, independently of the hardware on which a computation runs, information erasure dissipates energy. This is a fundamental result of thermodynamics in physics. Therefore, under this hypothesis, only reversible computations, where no information is ever lost, are likely to be thermodynamically adiabatic and not dissipate power. Reversibility means that data can always be retrieved from any point of the program.
Information may be carried not only by the data but also by the process and input data that generate it. When a computation is reversible, information can also be retrieved from other already computed data and reverse computation. Hence reversible computing improves information locality. This thesis develops these ideas in two directions. In the first part, we address the issue of making a computation DAG (directed acyclic graph) reversible in terms of spatial complexity. We define energetic garbage as the additional number of registers needed for the reversible computation with respect to the original computation. We propose a reversible register allocator and we show empirically that the garbage size is never more than 50% of the DAG size. In the second part, we apply this approach to the trade-off between recomputing (direct or reverse) and storage in the context of supercomputers such as the recent vector and parallel coprocessors, graphics processing units (GPUs), the IBM Cell processor, etc., where the gap between processor cycle time and memory access time is increasing. We show that recomputing in general, and reverse computing in particular, helps reduce register requirements and memory pressure. This approach of reverse rematerialization also contributes to increased instruction-level parallelism (Cell) and thread-level parallelism in multicore processors with a shared register file or memory (GPU). On the latter architecture, the number of registers required by the kernel limits the number of running threads and affects performance. Reverse rematerialization generates additional instructions, but their cost can be hidden by the parallelism gain. Experiments on the highly memory-demanding Lattice QCD simulation code on an Nvidia GPU show a performance gain of up to 11%.
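A toy illustration of the storage-versus-recompute trade-off discussed above, under the assumption of exact integer arithmetic: because addition destroys no information, an operand can be rematerialized by the reverse computation instead of being kept live in a register. This is only a sketch of the idea, not the thesis's register allocator.

```python
# Reverse rematerialization, toy version: c = a + b destroys no information,
# so a can be recovered later as c - b instead of occupying a register
# (or a GPU register slot) across its whole live range.

def forward(a, b):
    c = a + b          # after this point, a need not be stored
    return c

def recover_a(c, b):
    return c - b       # reverse computation rematerializes a on demand

if __name__ == "__main__":
    a, b = 7, 35
    c = forward(a, b)
    assert recover_a(c, b) == a   # recompute instead of storing
```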
118

Paralelní trénování neuronových sítí pro rozpoznávání řeči / Parallel Training of Neural Networks for Speech Recognition

Veselý, Karel January 2010 (has links)
This thesis deals with different parallelizations of the training procedure for artificial neural networks. The networks are trained as phoneme-state acoustic descriptors for speech recognition. Two effective parallelization strategies were implemented and compared. The first strategy is data parallelization, where the training is split into several POSIX threads. The second strategy is node parallelization, which uses the CUDA framework for general-purpose computing on modern graphics cards. The first strategy showed a 4x speed-up, while with the second strategy we observed a nearly 10x speed-up. The Stochastic Gradient Descent algorithm with error backpropagation was used for the training. After a short introduction, the second chapter of this thesis presents the motivation and places neural networks in the context of speech recognition. The third chapter is theoretical; the anatomy of a neural network and the training method used are discussed. The following chapters focus on the design and implementation of the project and describe the phases of the iterative development. The last, extensive chapter describes the setup of the testing system and reports the experimental results. Finally, the obtained results are summarized and possible extensions of the project are proposed.
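A hedged sketch of the data-parallelization strategy: each worker computes the gradient of the loss on its own shard of the minibatch, the shard gradients are averaged, and a single update is applied to the shared weights. The least-squares model and Python thread pool below are illustrative assumptions; the thesis trains phoneme-state neural networks with POSIX threads and, in the second strategy, CUDA.

```python
# Data-parallel stochastic gradient descent, illustrated on least-squares:
# split the minibatch across workers, average the per-shard gradients,
# then apply one update to the shared weights.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def shard_gradient(w, x, y):
    """Gradient of 0.5*||x @ w - y||^2 / n on one shard."""
    err = x @ w - y
    return x.T @ err / len(y)

def parallel_sgd_step(w, x_batch, y_batch, lr=0.1, workers=4):
    xs = np.array_split(x_batch, workers)
    ys = np.array_split(y_batch, workers)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        grads = list(pool.map(shard_gradient, [w] * workers, xs, ys))
    return w - lr * np.mean(grads, axis=0)   # averaged gradient = full-batch gradient

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x, true_w = rng.normal(size=(256, 8)), rng.normal(size=8)
    y = x @ true_w
    w = np.zeros(8)
    for _ in range(200):
        w = parallel_sgd_step(w, x, y)
    print(np.allclose(w, true_w, atol=1e-2))  # converges to the true weights
```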
119

Systémové řešení bezpečnosti informací v organizaci / Systematic Solution for Information Security in Organisation

Palička, Jan January 2017 (has links)
This diploma thesis deals with the implementation of an ISMS at Netcope Technologies, a. s., which produces network cards for high-speed acceleration. The thesis is divided into two logical parts. The first part presents the theoretical background, including selected methods for implementing information security. The second part presents an analysis of the company and the proposed measures.
120

Fault Detection, Isolation and Recovery : Analysis of two scheduling algorithms

Capitanu, Calin January 2021 (has links)
Unmanned as well as manned space missions saw a high failure rate in the early era of space technology. However, this has decreased considerably as technology advanced and engineers learned from previous experience and improved critical real-time systems with fault detection mechanisms. Fault detection, isolation and recovery is nowadays generally available in every flying device. However, the cost of hardware can bottleneck the process of creating such a system that is both robust and responsive. This thesis analyses the possibility of implementing a fault detection, isolation and recovery system inside a single-threaded, cooperatively scheduled operating system. The thesis suggests a cooperative implementation of such a system, where every task is responsible for parts of the fault detection. The analysis is done both from the integration layer, across the operating system and its tasks, and from inside the detection system, where two key components are implemented and analyzed: debug telemetry and operation modes. The results show that it is possible to implement a fault detection system that is spread across all the components of the satellite and acts cooperatively. Furthermore, the comparison with a traditional, dedicated fault detection system proves that errors can be caught faster with a cooperative mechanism. / Unmanned as well as manned space missions saw a high failure rate in the early era of space technology. This has, however, improved considerably since engineers began learning from their previous experience and equipped critical real-time systems with fault detection mechanisms. Today, all flying devices are equipped with fault detection, isolation and recovery mechanisms. However, the cost of hardware can be a problem when creating such a system that is both robust and responsive. This thesis analyses the possibility of implementing a fault detection, isolation and recovery system inside a single-threaded cooperative scheduling system. It proposes a cooperative implementation of such a system, in which every task is responsible for parts of the fault detection. The analysis is done both from the integration layer, across the operating system and its tasks, and from inside the detection system, where two key components are implemented and analysed. The results show that it is possible to implement a fault detection system that covers all of the satellite's components and acts cooperatively. In addition, the comparison with a traditional, dedicated fault detection system shows that faults can be caught faster with a cooperative mechanism.
/ Manned as well as unmanned space missions had a fairly high failure rate in the initial period of the space technology era. However, this decreased significantly as technology developed and engineers learned from previous experience and improved critical real-time systems with error detection mechanisms. Fault detection, isolation and recovery systems are available today in almost all space systems. However, the cost of equipment can hinder the creation of such detection systems that are robust and responsive. This thesis analyses the possibility of implementing a fault detection, isolation and recovery system in a satellite equipped with a single-threaded processor whose operating system uses a cooperative scheduler. The thesis proposes a cooperative implementation of such a system, in which each process is responsible for a part of the error detection. The analysis is carried out both from the perspective of integration with the operating system and its processes, and from inside the detection system, where two important elements are implemented and analysed: debug telemetry and operation modes. The results show that it is possible to implement a detection system that is distributed across all components of a satellite's system and behaves cooperatively. Furthermore, the comparison with a traditional, dedicated error detection system shows that errors can be detected faster with a cooperative system.
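A hedged sketch of the cooperative idea: rather than a dedicated monitor task, every task in a single-threaded, run-to-completion scheduler performs its own health check during its slice and reports into a shared fault table. The class names, the overtemperature check, and the safe-mode reaction are assumptions for illustration, not the thesis's actual flight software.

```python
# Cooperative FDIR sketch: a single-threaded round-robin scheduler runs each
# task to completion; every task checks its own fault condition and records
# detections in a shared table, so faults are noticed within the same cycle.

class FaultTable:
    def __init__(self):
        self.faults = {}

    def report(self, task_name, reason):
        self.faults[task_name] = reason

class Task:
    def __init__(self, name, work, check):
        self.name, self.work, self.check = name, work, check

    def step(self, faults):
        self.work()                      # nominal processing for this slice
        reason = self.check()            # cooperative fault detection
        if reason:
            faults.report(self.name, reason)

def run_cycle(tasks, faults):
    for task in tasks:                   # run-to-completion, no preemption
        task.step(faults)
    if faults.faults:                    # isolation/recovery hook (illustrative)
        print("entering safe mode:", faults.faults)

if __name__ == "__main__":
    temperature = {"value": 81.0}
    tasks = [
        Task("thermal", lambda: None,
             lambda: "overtemp" if temperature["value"] > 80.0 else None),
        Task("comms", lambda: None, lambda: None),
    ]
    run_cycle(tasks, FaultTable())
```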
