31

Fulfilling efficiently SLA availability guarantees in backbone networks

Tiwari, Prakriti January 2014 (has links)
The availability and reliability of backbone networks are important to society. However, physical failures, software failures and unintentional human errors affect the links and nodes in a backbone network. To overcome such failures, recovery mechanisms such as Protection and Restoration are utilized. Additionally, a Service Level Agreement (SLA) between the provider and the user defines and guarantees the network availability requirements and penalty schemes. In this thesis, fulfilling the SLA availability guarantee efficiently in a backbone network is investigated.

This thesis focuses on the problem of handling end-to-end path failures in backbone networks. Popular existing recovery mechanisms for such failures are Dedicated Backup Path Protection (DBPP) and Path Restoration (PR). A high degree of network survivability can be achieved by DBPP, with a reserved backup path for each provisioned connection; unfortunately, it is very costly and resource demanding. A PR-based solution consumes only the resources it needs, but it is slow to recover from failures, which might affect the SLA availability guarantee. The work in this thesis aims at providing a hybrid network recovery model that combines the benefits of both DBPP and PR. The hybrid model switches between DBPP and PR according to the SLA availability requirement over a contract period and the current network connection state, i.e. the remaining time of the SLA and the current sum of downtimes (accumulated downtime).

Moreover, an analysis of the failure logs of UNINETT's backbone network is made to model the probability distribution of the accumulated downtime under PR. A distribution fitting is made for the connection downtime data taken from UNINETT's backbone network, where the Weibull distribution proved to be a good approximation. Additionally, a model for the distribution of accumulated downtime under DBPP is provided, covering both non-simultaneous and simultaneous failures of the working path and backup path. An in-depth explanation of how these distribution models can be used in the design of hybrid models is presented.

Two hybrid models are proposed in this thesis. The first hybrid approach uses the DBPP scheme at the beginning of the SLA duration and switches to PR when the calculated SLA risk assessment shows that the probability of violating the SLA requirement is sufficiently low at time t. The second hybrid approach uses the PR scheme at the beginning of the SLA duration and switches to DBPP when the accumulated downtime at time t approaches the threshold of the SLA risk target, i.e. when the probability of violating the SLA requirement becomes high. The transition line that decides the switching between PR and DBPP is computed for each hybrid approach using the results obtained from the accumulated downtime distribution models of PR and DBPP. The transition line provides information about when a connection should switch between the Protection and Restoration mechanisms, given the network connection state. The computed transition lines with a 1 percent SLA risk target are verified via discrete event simulation in DEMOS. The SLA risk target is the probability of failing the SLA; the provider can tune the risk target by using an advanced network recovery mechanism (e.g. Protection) for more or less time. The simulation results show that the proposed hybrid models work well, fulfilling the SLA availability guarantee efficiently with respect to resource utilization.

In addition, the results reveal that using the PR scheme at the beginning of the SLA contract provides three times better resource utilization than using the DBPP scheme at the beginning. A cost analysis for network providers is made with different SLA risk targets in order to find the optimal SLA risk target. The analysis suggests that the total cost for network providers decreases as the SLA risk target increases until the total cost reaches its minimum, after which it starts to increase again. The results of this thesis may contribute to future research on developing hybrid models to reach particular performance objectives in communication networks.
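As a rough illustration of the kind of computation a transition line involves, the sketch below estimates when a connection should leave PR for DBPP. It assumes a Weibull model for future PR downtime whose scale shrinks with the remaining contract time; the parameter values, the scaling rule and the downtime budget are invented for the example and are not taken from the thesis.

    # Hypothetical sketch: deciding when to switch from PR to DBPP.
    # The Weibull parameters, the scaling of downtime with remaining time,
    # and the downtime budget are illustrative assumptions, not thesis values.
    from scipy.stats import weibull_min

    SLA_DOWNTIME_BUDGET_H = 8.76   # e.g. 99.9 % availability over one year, in hours
    RISK_TARGET = 0.01             # acceptable probability of violating the SLA

    # Assumed Weibull fit for accumulated downtime under PR over a full year
    SHAPE, SCALE_FULL_YEAR_H = 0.8, 2.0

    def should_switch_to_dbpp(accumulated_downtime_h, remaining_fraction):
        """Return True if staying on PR makes the violation risk exceed the target."""
        # Remaining downtime budget before the SLA is violated
        budget_left = SLA_DOWNTIME_BUDGET_H - accumulated_downtime_h
        if budget_left <= 0:
            return True
        # Assume future PR downtime scales with the remaining contract time
        future_pr = weibull_min(SHAPE, scale=SCALE_FULL_YEAR_H * remaining_fraction)
        # Probability that future PR downtime alone exceeds the remaining budget
        violation_prob = future_pr.sf(budget_left)
        return violation_prob > RISK_TARGET

    # Example: half the contract period left, 5 hours of downtime already accumulated
    print(should_switch_to_dbpp(accumulated_downtime_h=5.0, remaining_fraction=0.5))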
32

Analysis of Key Industrial WSN MAC Protocols

Koh, Kenneth Johannesen January 2014 (has links)
This paper looks at two MAC protocols for wireless sensor networks for use in industrial applications, developed at the Ubicom Lab at the University of Ulsan. A theoretical comparison of the MAC protocols is performed to understand more about the benefits and downsides of each, and experimental scenarios to validate the theoretical analysis are suggested. The theoretical analysis suggests that BigMAC has an advantage in environments with high interference and frequent link breaks, while I-MAC has an advantage when the network topology is stable.
33

Point to point wireless audio with limited bandwidth and processing

Brenden, Even Steen January 2012 (has links)
This project investigates the possibility of implementing a point-to-point wireless audio transmission system for real-time operation on a specific embedded platform. The platform is originally intended for wireless PC peripherals such as keyboards and mice and uses a proprietary radio module for transmission. The report accompanying the project can be used as a guide for doing a similar project on other platforms. The report is organized as follows. Properties of the platform are presented, and an assessment of how it can perform in terms of computational power is carried out. ADPCM is chosen as the compression scheme and its underlying theory is briefly presented. As the platform does not do native floating point operations, fixed point number representation and operations are discussed. Error concealment for packet transmission is briefly discussed. An assessment of the performance of the on-chip analog-to-digital converter is carried out, and an approach to implementing a 1-bit digital-to-analog converter using pulse-width modulation is discussed. Implementation issues when designing a real-time audio communication system are discussed. Performance tests are carried out, and finally a set of real-world applications for the system is presented. The project finds that a point-to-point wireless audio transmission system for half-duplex, real-time operation is possible on the given platform. Because of radio limitations, full-duplex operation is yet to be proven possible. A separate project investigates the radio module for the same platform in detail. For confidentiality, the name of the platform is not referenced.
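Since the abstract does not describe the codec internals, the following is only a much-simplified, hypothetical sketch of ADPCM-style coding in pure integer arithmetic (the kind of fixed-point formulation a platform without native floating point requires); the step-size rule and bit allocation are invented for illustration and are not the project's actual implementation.

    # Simplified adaptive-quantizer (ADPCM-like) codec in integer arithmetic.
    # Illustrative only; not the exact codec or platform used in the project.
    STEP_MIN, STEP_MAX = 16, 16384

    def adpcm_encode(samples):
        """Encode 16-bit PCM samples into 4-bit codes using an adaptive step size."""
        predicted, step = 0, 256
        codes = []
        for s in samples:
            diff = s - predicted
            sign = 8 if diff < 0 else 0
            diff = abs(diff)
            # Quantize the prediction error to 3 magnitude bits
            code = min(diff * 4 // step, 7)
            codes.append(sign | code)
            # Reconstruct exactly as the decoder will, to stay in sync with it
            delta = (code * step) // 4 + step // 8
            predicted += -delta if sign else delta
            predicted = max(-32768, min(32767, predicted))
            # Adapt the step size: grow on large codes, shrink on small ones
            step = step * 2 if code >= 4 else (step * 9) // 10
            step = max(STEP_MIN, min(STEP_MAX, step))
        return codes

    def adpcm_decode(codes):
        """Mirror of the encoder's reconstruction loop."""
        predicted, step = 0, 256
        out = []
        for c in codes:
            sign, code = c & 8, c & 7
            delta = (code * step) // 4 + step // 8
            predicted += -delta if sign else delta
            predicted = max(-32768, min(32767, predicted))
            step = step * 2 if code >= 4 else (step * 9) // 10
            step = max(STEP_MIN, min(STEP_MAX, step))
            out.append(predicted)
        return out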
34

A framework for the simulation and validation of acoustic fields from medical ultrasound transducers

Bakstad, Ole January 2012 (has links)
The unified simulation framework for medical ultrasound, FieldSim, currently supports linear and non-linear simulations by using Field II and Abersim, respectively. In this thesis the quasi-linear simulation tool, Propose, is incorporated in FieldSim and verified. The implementation uses Field II to generate the initial pressure propagated by Propose. It is shown to produce satisfactory results when compared to standalone Propose, Field II and Abersim for both the fundamental and second harmonic. The results are also verified in the water tank. The running time is found to be slower than standalone Propose, but still substantially quicker than Abersim for non-linear simulations. By combining core features of the FieldSim framework and Propose, new features are presented. It is now possible to easily simulate the second harmonic from a transducer with a measured impulse response and arbitrary excitation pulse using Propose in minutes, compared to hours with Abersim. By rotating the initial field, steering can now be achieved in Propose in a few lines of MATLAB code. A link between a research scanner and the FieldSim framework is presented. When finalized, a FieldSim simulation can be converted to a file read by PyTexo, making it possible to use the exact same setup for both simulations and measurements in the water tank. The current implementation supports single ultrasound beams and B-mode with fixed focus.
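The "steer by rotating the initial field" idea amounts to a plain coordinate rotation of the observation grid. The sketch below illustrates that idea generically (in Python rather than MATLAB) with an invented grid and angle; it does not use or imitate the Propose or FieldSim interfaces.

    # Generic illustration: rotate (x, z) field-point coordinates by an angle.
    # Grid, angle and units are made up for the example.
    import numpy as np

    def rotate_field_points(x, z, theta_rad):
        """Rotate (x, z) observation points about the origin by theta."""
        c, s = np.cos(theta_rad), np.sin(theta_rad)
        return c * x - s * z, s * x + c * z

    # Example: steer a 64 x 64 grid of field points by 10 degrees
    x, z = np.meshgrid(np.linspace(-5e-3, 5e-3, 64), np.linspace(10e-3, 60e-3, 64))
    x10, z10 = rotate_field_points(x, z, np.deg2rad(10.0))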
35

Reliable Broadcast Contribution over the Public Internet

Markman, Martin Alexander Jarnfeld, Tokheim, Stian January 2012 (has links)
Broadcast contribution is point-to-point media transfer from recording sites to local editing studios, between studio facilities, and to distribution centers. The contribution phase has strict Quality of Service (QoS) requirements for reliability and bandwidth - any error might degrade end users' Quality of Experience (QoE) in the subsequent distribution phase. Dedicated IP contribution networks have become the preferred technology for contribution from content creation sites. Occasionally, however, contribution happens at a site without access to a dedicated IP contribution network - in which case the broadcaster must utilize less optimal technologies. Being IP based, the public Internet may be a superior solution in some scenarios due to its high bandwidth and geographical coverage - if the Internet can conform to the strict contribution requirements for QoS and QoE. This thesis attempts to give a clear answer to this question. Our investigation was done in three parts. First, we uncovered recent Internet QoS trends in Norway. We found that the Internet has become a video delivery platform, which in turn has resulted in a bandwidth increase in access networks. Bandwidth in residential access links now conforms to contribution requirements. ISPs make a profit according to the level of offered QoS, so broadcasters can expect high QoS. Broadcasters can also buy QoS guarantees, which may be a viable and safe solution. Secondly, we recorded over 21 hours of Internet QoS statistics on a connection traversing 11 routers and one peering point. The measured level of every QoS metric (packet loss, jitter and re-ordering) conformed to professional contribution network requirements, except the rate of packet loss bursts. However, no burst above 200 ms was recorded and no two consecutive bursts happened within a 2 second time frame. Based on this, we explained how simple error control strategies can correct or mask packet loss bursts with a 200-250 ms delay tradeoff and 15-30% bandwidth overhead. Third, we did subjective tests over the Internet with two professional JPEG2000 contribution gateways delivered by T-Vips. A full movie was encoded at 70 Mbit/s, a bitrate used for very high quality contribution, and shown to a test panel of 24 participants. By analyzing questionnaires, we showed that contribution over the Internet yields equally good QoE as cable TV. We also found that noticeable degradations due to packet loss happened once per hour on average. Furthermore, packet loss bursts below 4 ms were generally not visible to the viewers. Because the Internet provides both the required QoS and QoE, we concluded that broadcasters can do contribution over the Internet at the required level of quality whenever this is a favorable option.
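To put the numbers above in perspective, the back-of-the-envelope sketch below works out how much data a 200 ms loss burst represents at 70 Mbit/s, what a 250 ms playout buffer holds, and what average extra bandwidth is needed if the lost data is repaired within the following second. The 1400-byte payload and the one-second repair window are illustrative assumptions, not parameters or strategies taken from the thesis.

    # Illustrative arithmetic; payload size and repair window are assumptions.
    BITRATE_BPS = 70e6
    PAYLOAD_BITS = 1400 * 8

    burst_s, buffer_s, repair_window_s = 0.200, 0.250, 1.0

    packets_per_second = BITRATE_BPS / PAYLOAD_BITS          # 6250 pkt/s
    lost_packets = packets_per_second * burst_s              # 1250 packets in the burst
    burst_bits = BITRATE_BPS * burst_s                       # 14 Mbit (~1.75 MB) lost
    buffer_bits = BITRATE_BPS * buffer_s                     # 17.5 Mbit of playout buffer

    # If the lost data is repaired (retransmitted or FEC-recovered) within the
    # next repair_window_s, the average extra bandwidth needed is:
    overhead = burst_bits / (BITRATE_BPS * repair_window_s)  # 20 %
    print(lost_packets, burst_bits / 8e6, buffer_bits / 8e6, f"{overhead:.0%}")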
36

Improving the Visual Experience When Coding Computer Generated Content Using the H.264 Standard

Berthelsen, Nicolai January 2011 (has links)
The purpose of this Master's thesis was to improve the visual experience when coding computer generated content (CGC) using the H.264 standard. As H.264 is designed primarily to code natural video, it exhibits weaknesses when coding CGC at low bit rates. The thesis has focused on identifying and modifying the components in the H.264 algorithm responsible for the occurrence of unwanted noise artifacts. The research method was based on performing quantitative research to confirm or deny the hypothesis claiming that the H.264 algorithm performs sub-optimally when coding CGC. Experiments were conducted using coders written specifically for the thesis. The results from these experiments were then analyzed, and conclusions were drawn based on empirical observations. An implementation of H.264 was used to identify the noise artifacts resulting from coding CGC at low rates. The results indicated that H.264 indeed performs sub-optimally when coding CGC. We learned that the reason for this was that the characteristics of CGC led to the signal being more compactly represented in the spatial domain than in the transform domain. We therefore proposed to omit the component transform and quantize the residual signal directly. This method, called residual scalar quantization (RSQ), was shown to outperform traditional H.264 coding for certain CGC in terms of quantified visual quality and bit rate. However, even when outperformed, the RSQ coder did not exhibit any of the noise artifacts present when coding with the traditional coder. We also introduced Rate-Distortion optimization, which allowed the coder to adaptively choose between traditional and RSQ coding, ensuring that each block is coded optimally, independent of the source content. This scheme was shown to outperform both stand-alone coders for all sample content. A quantizer with representation levels tailored specifically for the characteristics of CGC was also presented, and experiments showed that it outperformed uniform quantization when coding CGC. The results in this thesis were produced by simplified versions of the actual coders, and may not be completely accurate. However, the accumulated results indicate that RSQ may indeed outperform traditional H.264 coding for CGC. To confirm the theories that have been presented, the proposed techniques should be implemented in a full-scale implementation of H.264 and the experiments repeated.
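The mode decision described above can be sketched as follows: for each residual block, compare a transform-and-quantize path with direct residual scalar quantization (RSQ) under a rate-distortion cost and keep the cheaper one. The transform, rate proxy and lambda below are simplified stand-ins, not the H.264 reference implementation or the thesis coders.

    # Simplified RD-based choice between transform coding and RSQ for one block.
    import numpy as np
    from scipy.fft import dctn, idctn

    QSTEP, LAMBDA = 16.0, 40.0

    def rd_cost(original, reconstructed, levels):
        distortion = np.sum((original - reconstructed) ** 2)            # SSD
        rate_proxy = np.count_nonzero(levels) + np.sum(np.abs(levels))  # crude rate estimate
        return distortion + LAMBDA * rate_proxy

    def code_block(residual):
        """Return ('transform' or 'rsq', reconstructed block) for a residual block."""
        # Path 1: transform, quantize, dequantize, inverse transform
        coeffs = dctn(residual, norm='ortho')
        levels_t = np.round(coeffs / QSTEP)
        rec_t = idctn(levels_t * QSTEP, norm='ortho')
        # Path 2: quantize the residual samples directly (RSQ)
        levels_r = np.round(residual / QSTEP)
        rec_r = levels_r * QSTEP
        cost_t = rd_cost(residual, rec_t, levels_t)
        cost_r = rd_cost(residual, rec_r, levels_r)
        return ('rsq', rec_r) if cost_r < cost_t else ('transform', rec_t)

    # Example: a sharp-edged block of the kind found in computer generated content
    block = np.zeros((4, 4))
    block[:, 2:] = 100.0
    print(code_block(block)[0])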
37

Cognitive Radio and TV White Space Communications : TV White Space Geo-location Database System

Zurutuza, Naroa January 2011 (has links)
The aim of this thesis is to research the use of emerging TV white space communications by implementing a geo-location database system. To that end, some research and theoretical studies related to cognitive radio and TV white space communications will be done first, focusing on current activities, standardization processes, commercial approaches and related projects. Once the background and the present status of TV white space communications is analyzed, a geo-location database system will be designed and developed to prove the potential of this technology. The operation of the database system will be demonstrated through a web interface. In this way, an open and publicly accessible geo-location database system implementation and structure will be created (note that even though several database system creation initiatives are taking place, most of them are private). However, due to the lack of official regulations, established standards, and actual transmission data (data from TV broadcasters, wireless microphones, etc.), only an initial TV white space database system demo will be implemented to model its operation. It will be possible to access and query this database system through a simple web interface for the Oslo area. After analyzing the results of the implementation and looking at other TV white space initiatives, some considerations for future work will be drawn.
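As a toy illustration of the kind of query such a geo-location database answers - given a location, return the TV channels assumed free of primary users there - the sketch below uses an invented channel range, invented primary-user entries and protection radii; it has nothing to do with the actual Oslo data set or the thesis's web interface.

    # Toy TV white space availability lookup; all data below is made up.
    from math import radians, sin, cos, asin, sqrt

    ALL_CHANNELS = set(range(21, 61))              # assumed UHF channel range

    # (latitude, longitude, protection radius in km, channel used by a primary user)
    PRIMARY_USERS = [
        (59.95, 10.75, 60.0, 23),                  # hypothetical TV transmitter
        (59.91, 10.73, 1.0, 36),                   # hypothetical wireless microphone
    ]

    def distance_km(lat1, lon1, lat2, lon2):
        """Great-circle distance (haversine)."""
        dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
        a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
        return 2 * 6371.0 * asin(sqrt(a))

    def available_channels(lat, lon):
        """Channels not protected by any primary user at this location."""
        blocked = {ch for plat, plon, radius, ch in PRIMARY_USERS
                   if distance_km(lat, lon, plat, plon) <= radius}
        return sorted(ALL_CHANNELS - blocked)

    print(available_channels(59.93, 10.72))        # query for a point in Oslo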
38

Evaluating Vowel Pronunciation in Computer Assisted Pronunciation Training.

Erichsen, Stian January 2011 (has links)
Computer Assisted Pronunciation Training (CAPT) applications are tools that can be used when learning a second language. By evaluating the speech of a student, the CAPT system is able to give automatic feedback on his or her pronunciation performance. Two important properties of Norwegian pronunciation are the quantity and quality of the vowels. It is therefore important that students get to practice these. Feedback was produced by an ASR-based CAPT system, where a speech recognizer evaluated the pronunciation produced by different speakers. However, since ASR is prone to errors, verification was later performed to test the correctness of the recognizer's results. The recognizer had an error rate of 7.5 % when evaluating vowel quantity, and an error rate of 42.1 % when evaluating vowel quality. After verification, the first error rate was reduced to 1.35 % by rejecting 7.2 % of the results. The second error rate was reduced to 27.7 % by rejecting 23.5 % of the results. The use of such a system could therefore be justified for evaluating vowel quantity in the pronunciation, but not vowel quality.
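The verification step described above can be pictured as rejecting recognizer results whose confidence falls below a threshold and recomputing the error rate over the accepted results. The sketch below is a hypothetical illustration of that idea; the scores and labels are made-up example data, not the thesis's verification method or measurements.

    # Confidence-based rejection: illustrative only, data invented for the example.
    def error_rate_after_rejection(results, threshold):
        """results: list of (confidence, is_error) pairs from the recognizer."""
        accepted = [(c, e) for c, e in results if c >= threshold]
        rejected_fraction = 1 - len(accepted) / len(results)
        errors = sum(e for _, e in accepted)
        return errors / len(accepted), rejected_fraction

    example = [(0.9, False), (0.8, False), (0.3, True), (0.7, False), (0.4, True),
               (0.95, False), (0.6, False), (0.35, True), (0.85, False), (0.75, False)]
    rate, rejected = error_rate_after_rejection(example, threshold=0.5)
    print(f"error rate {rate:.1%} after rejecting {rejected:.0%} of the results")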
39

Automatisk sporing av Dopplerspektrum / Tracing of cardiac Doppler spectrums

Vartdal, Gaute January 2011 (has links)
A number of values in the heart that are normally found by invasive catheter examination can be estimated with high accuracy from velocities found with Doppler ultrasound. For example, the pressure in the heart chambers and blood vessels is important information when examining a patient's cardiac function. By studying the contours of the blood velocity at given points in the heart, these values can be calculated with Bernoulli's simplified equation without penetrating the patient with a catheter. The maximal positive and negative rates of pressure rise (dP/dt) in the ventricle are examples of values that can be found, and are two of the most widely used indicators of ventricular function. They can be calculated from the velocity of leakage from the ventricle to the atrium, also called mitral regurgitation. The ability to measure these and other values with Doppler ultrasound is an enormous advantage over catheterization. Usually the tracing of a Doppler spectrum must be done manually, a process that is time-consuming and difficult. This master's thesis proposes an algorithm that can automatically trace the contours of Doppler measurements. The algorithm is adapted to Doppler spectra of mitral regurgitation, but works in general for all types of spectra. The algorithm also attempts to handle partially weak or missing edges in the spectrum. The results are compared with manually traced edges and show that the algorithm can calculate values such as dP/dt and maximal velocity with high accuracy. Maximal and minimal dP/dt can be calculated with an average difference from the manual tracing of less than 100 mmHg/s, and the peak velocity with a difference of less than 0.05 m/s. The results show that as long as the quality of the Doppler measurements is acceptable, the algorithm traces the contour accurately and effectively removes noise and artifacts along the contour. The difference between automatically and manually obtained maximal dP/dt has a standard deviation as low as 79 mmHg/s when the spectra are of good quality. Even in spectra where parts of the signal are weak, values such as dP/dt can be predicted by the algorithm, and when less than 60% of a spectrum must be predicted, maximal and minimal dP/dt can be found very accurately.
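The step from a traced velocity contour to pressure values uses the simplified Bernoulli equation, dP = 4*v^2 (pressure gradient in mmHg for a velocity v in m/s); dP/dt is then the time derivative of that gradient along the mitral-regurgitation velocity curve. The minimal sketch below shows the arithmetic on a synthetic contour invented for the example; it is not data from the thesis and not the tracing algorithm itself.

    # Simplified Bernoulli and dP/dt on a synthetic velocity contour.
    import numpy as np

    fs = 1000.0                                  # assumed trace sampling rate, Hz
    t = np.arange(0, 0.4, 1 / fs)
    velocity = 5.0 * np.sin(np.pi * t / 0.4)     # synthetic MR jet velocity, m/s

    pressure = 4.0 * velocity ** 2               # simplified Bernoulli, mmHg
    dp_dt = np.gradient(pressure, 1 / fs)        # mmHg/s

    print(f"peak velocity : {velocity.max():.2f} m/s")
    print(f"peak gradient : {pressure.max():.1f} mmHg")
    print(f"max +dP/dt    : {dp_dt.max():.0f} mmHg/s")
    print(f"max -dP/dt    : {dp_dt.min():.0f} mmHg/s")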
40

Economics in the Small and Independent Game Industry

Bergquist, Martin Svedal January 2011 (has links)
In order to look into the small and independent game industry, this thesis presents an analysis of the business model of the game Minecraft using the ontology and framework defined in Osterwalder's dissertation. The main pillars - product, customer interface, infrastructure management and financial aspects - are explained, in addition to various common revenue models used by game developers and publishers. The thesis further models the economy of the game and identifies its main cost accounts: storage, bandwidth, office rental, salaries, professional taxes, transactions and miscellaneous costs. Based on this, in addition to the revenues connected to the game, the thesis shows the most important success factors for the developer with regard to peer production and value creation, and presents the most important changes and suggestions for the future of the game. Furthermore, the thesis suggests important traits and effects for independent and small games based on the findings in the case study of Minecraft. Among these are free-model effects and network effects, used to attract users and obtain high-value networks at low cost, and revenue models based on the value of the network through increased sales of value-added services, advertising and new market acquisition.
