1

Acquisition of service information from current terminals of GSM and UMTS cellular networks, and service procedures for mobile terminals

Kříž, Jakub, January 2008
This thesis concentrates on the UMTS cellular network and the possibilities of monitoring it. Of the several methods available for monitoring UMTS, the technique used in this project is based on monitoring through a cellular terminal. The theoretical part is devoted to a description of the UMTS system and the WCDMA technique. The practical part then deals with the method of monitoring UMTS through a cellular terminal and describes in detail the individual screens of the FTD (Field Test Display) program, the Netmonitor functions, and their parameters. The next part of the thesis analyses the RRC messages recorded while services (video calls, voice calls, data transfers) were carried out in the UMTS network. The last chapters briefly deal with software and hardware servicing of cellular terminals. The appendix offers two lab exercises in which students become acquainted with the UMTS network structure and the behaviour of a cellular terminal in this network.
2

Fuzzing Radio Resource Control messages in 5G and LTE systems: Testing telecommunication systems with an adaptive fuzzer based on ASN.1 grammar rules

Potnuru, Srinath, January 2021
5G telecommunication systems must be ultra-reliable to meet the needs of the next evolution in communication. The systems deployed must be thoroughly tested and must conform to their standards. Software and network protocols are commonly tested with techniques like fuzzing, penetration testing, code review, and conformance testing. With fuzzing, testers can send crafted inputs and monitor the System Under Test (SUT) for a response. 3GPP, the standardization body for telecom systems, produces new versions of its specifications as features continuously evolve. This leads to many versions of the specification for a network protocol like Radio Resource Control (RRC), and testers need to constantly update their testing tools and testing environment. In this work, it is shown that by exploiting the generic nature of the RRC specifications, which are given in the Abstract Syntax Notation One (ASN.1) description language, one can design a testing tool that adapts to all versions of the 3GPP specifications. This thesis introduces an ASN.1-based adaptive fuzzer that can be used for testing RRC and other network protocols defined in ASN.1. The fuzzer extracts knowledge about ongoing RRC messages from the protocol description files, i.e., the RRC ASN.1 schema from 3GPP, and uses that knowledge to fuzz RRC messages. When mutating the content of existing messages, the adaptive fuzzer identifies individual fields, sub-messages, and custom data types according to the specifications. Furthermore, the adaptive fuzzer has identified a previously unknown vulnerability in the Evolved Packet Core (EPC) of srsLTE and openLTE, two open-source LTE implementations, confirming its applicability to robustness testing of RRC and other network protocols.
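The core idea described above — schema-guided, type-aware mutation rather than blind bit-flipping — can be illustrated with a minimal Python sketch. The schema below is a hypothetical, hand-written stand-in; the thesis' fuzzer derives field types and constraints from the real 3GPP RRC ASN.1 files, which this sketch does not do.

```python
import random

# Hypothetical, simplified stand-in for an ASN.1 schema: each field is
# (name, type, value constraint). A real tool would derive this from the
# 3GPP RRC ASN.1 definitions with an ASN.1 compiler.
RRC_SETUP_SCHEMA = [
    ("rrc-TransactionIdentifier", "INTEGER", (0, 3)),
    ("srb-Identity",              "INTEGER", (1, 2)),
    ("logicalChannelGroup",       "INTEGER", (0, 3)),
]

def mutate_field(value, ftype, constraint):
    """Type-aware mutation: favor boundary and just-out-of-range values,
    which tend to stress protocol decoders the most."""
    if ftype == "INTEGER":
        lo, hi = constraint
        candidates = [lo, hi, lo - 1, hi + 1, random.randint(lo, hi)]
        return random.choice(candidates)
    return value  # other ASN.1 types omitted in this sketch

def fuzz_message(message, schema):
    """Return a copy of `message` with one schema-guided field mutated."""
    name, ftype, constraint = random.choice(schema)
    mutated = dict(message)
    mutated[name] = mutate_field(mutated[name], ftype, constraint)
    return mutated

msg = {"rrc-TransactionIdentifier": 1, "srb-Identity": 1, "logicalChannelGroup": 0}
print(fuzz_message(msg, RRC_SETUP_SCHEMA))
```

Because the mutation logic reads field names and constraints from the schema rather than hard-coding them, swapping in a newer schema version adapts the fuzzer automatically — the property the thesis exploits across 3GPP releases.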
3

Analysis and Development of Error-Job Mapping and Scheduling for Network-on-Chips with Homogeneous Processors

Karlsson, Erik, January 2010
Due to the increased complexity of today's computer systems, which are manufactured in recent semiconductor technologies, and the fact that these technologies are more liable to soft errors (non-permanent errors), it is inherently difficult to ensure that systems are and will remain error-free. Depending on the application, a soft error can have serious consequences for the system. It is therefore important to detect soft errors as early as possible, recover from the erroneous state, and maintain correct operation. There is an entire research area, known as fault tolerance, devoted to proposing, implementing, and analyzing techniques that can detect and recover from such errors. The drawback of using fault tolerance is that it usually introduces some overhead. This overhead may be, for instance, redundant hardware, which increases the cost of the system, or a time overhead that negatively impacts system performance. Thus a main concern when applying fault tolerance is to minimize the imposed overhead while the system still delivers correct, error-free operation. In this thesis we analyze one well-known fault-tolerant technique, Rollback-Recovery with Checkpointing (RRC). This technique detects and recovers from errors by taking and storing checkpoints during the execution of a job. The job can thus be viewed as divided into a number of execution segments, with a checkpoint taken after each segment. The technique requires the job to be executed concurrently on two processors. At each checkpoint, both processors exchange data containing enough information to capture the job's state, and the exchanged data are compared. If the data differ, an error has been detected in the previous execution segment, which is therefore re-executed.
If the exchanged data are the same, no errors have been detected and the data are stored as a safe point from which the job can later be restarted. Exchanging data between processors therefore introduces a time overhead, which increases the average execution time of a job, i.e., the average time required for a given job to complete. The overhead depends on the number of links that have to be traversed (due to the data exchange) after each execution segment and on the number of execution segments needed for the given job. The number of links traversed after each execution segment is twice the distance between the processors that are executing the same job concurrently. However, this holds only if all links are fully functional; a link failure can result in a longer communication route between the processors. Even when all links are fully functional, the number of execution segments still depends on the error-free probabilities of the processors, and these probabilities can vary between processors. This implies that the choice of processors affects the total number of links the communication has to traverse. Choosing two processors with higher error-free probability that are further away from each other increases the distance but decreases the number of execution segments, which can result in a lower overhead. By carefully determining the mapping for a given job, one can decrease the overhead and hence the average execution time. Since it is very common to have more jobs than available resources, it is important not only to find a good mapping that decreases the average execution time of the whole system, but also a good order of execution for a given set of jobs (scheduling of the jobs).
In this thesis we propose several mapping and scheduling algorithms that aim to reduce the average execution time in a fault-tolerant multiprocessor System-on-Chip that uses a Network-on-Chip as its underlying interconnect architecture, so that the fault-tolerant technique (RRC) can perform efficiently.
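The mapping trade-off described above — a more reliable but more distant processor pair versus a nearby but less reliable one — can be sketched with a simple back-of-the-envelope cost model. This is my own simplification under assumed parameters, not the thesis' exact formulation: a segment succeeds only if both replicas are error-free, so the expected number of attempts per segment is geometric, and every attempt ends with a checkpoint exchange crossing twice the processor distance in links.

```python
def expected_execution_time(T, n, p1, p2, distance, link_delay):
    """Simplified expected completion time of a job under RRC.

    T          -- error-free execution time of the whole job
    n          -- number of execution segments (one checkpoint each)
    p1, p2     -- per-segment error-free probabilities of the two processors
    distance   -- hops between the two processors on the NoC
    link_delay -- time to traverse one link during the checkpoint exchange
    """
    p_seg = p1 * p2                       # both replicas must be error-free
    per_attempt = T / n + 2 * distance * link_delay
    # Geometric retries: expected attempts per segment = 1 / p_seg.
    return n * per_attempt / p_seg

# Trade-off from the text: the distant but more reliable pair can still win,
# because fewer re-executions offset the longer checkpoint-exchange path.
near = expected_execution_time(T=100.0, n=10, p1=0.80, p2=0.80, distance=1, link_delay=0.5)
far  = expected_execution_time(T=100.0, n=10, p1=0.99, p2=0.99, distance=4, link_delay=0.5)
print(near, far)   # here the distant, reliable pair has the lower expected time
```

A mapping algorithm in the spirit of the thesis would evaluate such a cost for each candidate processor pair and pick the minimum, though the thesis' actual model and algorithms are more elaborate.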
4

Extraction of radio frequency quality metric from digital video broadcast streams by cable using software defined radio

Eriksson, Viktor, January 2013
The purpose of this master thesis was to investigate how efficient the extraction of radio frequency quality metrics from digital video broadcast (DVB) streams can become using software defined radio. Software defined radio (SDR) is a fairly new technology that offers the possibility of very flexible receivers and transmitters, where the modulation and demodulation can be upgraded over time. Agama is interested in SDR for use in the Agama Analyzer, a widely deployed monitoring probe running on top of standard services. Using SDR, Agama could cover all deployments, such as DVB by cable/terrestrial/satellite (DVB-C/T/S), which would simplify logistics. This thesis is an implementation of an SDR able to receive DVB-C. The SDR must perform a number of adaptive algorithms in order to prevent the received symbols from differing significantly from the transmitted ones. The main parts of the SDR are timing recovery, carrier recovery, and equalization. Timing recovery performs synchronization between the transmitted and received symbols, while carrier recovery synchronizes the local oscillator in the receiver with the carrier wave of the transmitter. The thesis discusses various methods for performing these types of synchronization and equalization in order to find the most suitable ones.
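To make the carrier-recovery step concrete, here is a toy decision-directed phase-recovery loop for QPSK, heavily simplified from what a DVB-C receiver needs (which uses higher-order QAM and must also track frequency offsets); the signal, loop gain, and phase offset are assumed values, not taken from the thesis.

```python
import numpy as np

# Noise-free QPSK symbols with a constant carrier phase error of 0.3 rad.
rng = np.random.default_rng(1)
bits_i = rng.integers(0, 2, 200) * 2 - 1
bits_q = rng.integers(0, 2, 200) * 2 - 1
symbols = (bits_i + 1j * bits_q) / np.sqrt(2)
received = symbols * np.exp(1j * 0.3)

phase_est = 0.0
mu = 0.05                               # first-order loop gain
corrected = np.empty_like(received)
for i, r in enumerate(received):
    y = r * np.exp(-1j * phase_est)     # de-rotate by the current estimate
    # Decision-directed: slice to the nearest QPSK constellation point...
    d = (np.sign(y.real) + 1j * np.sign(y.imag)) / np.sqrt(2)
    # ...and use the phase of the decision error to update the estimate.
    err = np.angle(y * np.conj(d))
    phase_est += mu * err
    corrected[i] = y

print(round(phase_est, 2))              # converges toward the true offset, 0.3
```

Decision-directed loops like this work only while the residual phase error stays inside the decision region (here below pi/4); acquiring larger offsets is one reason practical receivers combine several recovery stages, as the thesis compares.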
6

Determining Interstellar Reddening Using Intrinsic Colors of C-Type RR Lyrae Variables

Anderson, Tyler, 08 November 2012
No description available.
7

Improvements to Uplink Packet Access in WCDMA

Dimou, Konstantinos, 18 December 2003
3G systems offer new bearer services at higher bit rates for packet transmission. Since these services will coexist with voice (and other real-time services), mixed voice-and-data traffic scenarios must be considered; the UMTS standard indeed allows users to have more than one service active simultaneously. The different traffic classes increase the complexity of radio resource management. In this context, two types of functions are studied: TFC allocation and packet scheduling. Their impact on quality of service (QoS) and on system capacity is evaluated, and improvements to these mechanisms are proposed with the aim of increasing system capacity and consequently improving the users' QoS. The studies are restricted to the uplink, i.e., transmissions from the mobile (User Equipment, UE) to the network. A first mechanism for which improvements are proposed is radio link adaptation through variation of the instantaneous transmitted bit rate. The case of a multiservice transmission (voice and data) is simulated. The UE must share an overall allocated bit rate among its active services, which are carried in radio bearers. At each elementary transmission interval (Transmission Time Interval, TTI), the UE selects a sub-rate for each bearer by choosing a "transport format" to apply during that TTI. This procedure takes place in the MAC (Medium Access Control) layer; the result of the selection is a Transport Format Combination (TFC) that the physical layer must use. The procedure, called TFC selection, adapts the transmission of the different services to the varying radio propagation conditions and therefore largely determines transmission performance.
The TFC selection algorithm is only outlined in the standard. One of its principles is to favour real-time traffic at the expense of packet-data services. However, real-time traffic can be disturbed by data traffic under certain conditions, in particular for mobiles far from the base station (Node B). A TFC selection algorithm is proposed that limits these disturbances and offers a larger coverage area for real-time services. In addition, it improves the QoS of the data service and the effective throughput of the UE without increasing its transmission power. A second line of study concerns packet scheduling among the different users (UEs), a procedure controlled by the fixed part of the network. It is studied mainly by simulation, considering several mechanisms. A first mechanism, called fast Variable Spreading Factor (fast VSF), has distant UEs change their spreading factor (SF) rapidly in order to keep a constant transmission power, which aims to stabilize inter-cell interference. A second mechanism is a decentralized packet access (decentralized mode) using feedback on the overall interference level in the cell. A third mechanism, called fast scheduling, shortens the scheduling cycle. The results show that, at low or medium cell load, the decentralized mode reduces the per-packet delay by up to 25%. Fast scheduling increases system capacity by up to 10%; in addition, it improves the QoS perceived by users in terms of per-user throughput and per-packet delay.
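The principle behind TFC selection — serve the real-time bearer first, then give the packet-data bearer the highest rate that fits the remaining power budget — can be sketched as a tiny greedy search. The transport format sets, power model, and numbers below are illustrative assumptions, not the 3GPP algorithm.

```python
from itertools import product

voice_formats = [0, 1]           # transport block sizes (rate units) for the voice bearer
data_formats  = [0, 2, 4, 8]     # transport block sizes for the packet-data bearer
POWER_PER_UNIT = 1.0             # assumed transmit-power cost per rate unit
POWER_BUDGET   = 7.0             # current UE power headroom (assumed)

def select_tfc(voice_demand):
    """Pick the (voice, data) transport format combination that first
    satisfies the real-time bearer, then maximizes the data rate
    within the power budget."""
    best = None
    for v, d in product(voice_formats, data_formats):
        if v < voice_demand:                       # never starve the real-time bearer
            continue
        if (v + d) * POWER_PER_UNIT > POWER_BUDGET:  # combination not affordable
            continue
        if best is None or d > best[1]:
            best = (v, d)
    return best

print(select_tfc(voice_demand=1))   # -> (1, 4): voice served, best feasible data rate
```

The thesis' contribution lies in how this choice behaves near the coverage edge, where the power budget shrinks and a naive selection lets data traffic disturb the voice bearer; the hard `v < voice_demand` filter above is the simplest expression of the priority rule it refines.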
8

Diffusion and Flow on Microscopic Length Scales Studied with Fluorescence Correlation Spectroscopy

Pieper, Christoph Michael, 23 October 2012
No description available.
9

Implementation of the LTE Cat-M1 Communication Technology Using the Network Simulator 3

Maslák, Roman, January 2021
This thesis deals with the implementation of LTE Cat-M technology in the Network Simulator 3 (NS-3) simulation tool. It first describes the main concepts of the Internet of Things (IoT) and Machine-to-Machine (M2M) communication, then the most widely used LPWA technologies and their use cases: Sigfox, LoRaWAN, Narrowband IoT (NB-IoT), and Long Term Evolution for Machines (LTE Cat-M), with LTE Cat-M described in more detail. Simulations are run in NS-3 using the LENA module and provide information about the network state under different network configurations. Finally, the Radio Resource Control (RRC) state handling in NS-3 is modified; these changes are required for a correct implementation of LTE Cat-M in NS-3 and make it possible to run simulations that conform to the LTE Cat-M specification.
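The RRC state handling mentioned above can be pictured as a small state machine. The sketch below is a generic illustration of the idea (states and transition events are simplified assumptions of mine); it is plain Python, not the C++ code of NS-3's LENA module that the thesis actually modifies.

```python
# Simplified RRC state machine: a UE moves between IDLE and CONNECTED
# in response to events; invalid events in a given state are ignored.
TRANSITIONS = {
    ("IDLE",      "connection_request"): "CONNECTED",
    ("CONNECTED", "inactivity_timer"):   "IDLE",
    ("CONNECTED", "release"):            "IDLE",
}

class RrcStateMachine:
    def __init__(self):
        self.state = "IDLE"
        self.history = ["IDLE"]

    def handle(self, event):
        nxt = TRANSITIONS.get((self.state, event))
        if nxt is None:          # event not valid in the current state
            return self.state
        self.state = nxt
        self.history.append(nxt)
        return nxt

ue = RrcStateMachine()
ue.handle("connection_request")
ue.handle("inactivity_timer")
print(ue.history)   # ['IDLE', 'CONNECTED', 'IDLE']
```

Adjusting which transitions exist and when they fire — e.g., how long the inactivity timer runs before a Cat-M device drops back to IDLE — is the kind of change the thesis makes so that simulated behaviour matches the LTE Cat-M specification.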
10

Spectroscopic Acquisition of the Gas Temperature within the Otto Engine

Müller, Ralf, 17 December 2009
No description available.
