801 |
Distributed NetFlow Processing Using the Map-Reduce Model
Morken, Jan Tore, January 2010 (has links)
<p>We study the viability of using the map-reduce model and frameworks for NetFlow data processing. The map-reduce model is an approach to distributed processing that simplifies implementation work, and it can also help add fault tolerance to large processing jobs. We design and implement two prototypes of a NetFlow processing tool. One prototype is based on a design where we freely choose the approach we consider optimal with regard to performance; this prototype functions as a reference design. The other prototype is built on a map-reduce framework and makes use of its supporting features. The performance of both prototypes is benchmarked, and we evaluate the framework-based prototype against the reference design. Based on the benchmarks we analyse and comment on the differences in performance, and draw a conclusion about the suitability of the map-reduce model and frameworks for the problem at hand.</p>
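The abstract does not include code; as a hypothetical sketch, the map-reduce decomposition of a typical NetFlow aggregation task (total bytes per source IP) might look like the following. The record layout and field names are illustrative assumptions, and the sequential runner stands in for a real framework's distributed shuffle:

```python
from collections import defaultdict

def map_flow(record):
    """Map phase: emit (source IP, byte count) for one NetFlow record.
    The (srcaddr, dstaddr, bytes) record format is a simplified assumption."""
    srcaddr, dstaddr, nbytes = record
    yield srcaddr, nbytes

def reduce_bytes(key, values):
    """Reduce phase: sum all byte counts emitted for one source IP."""
    return key, sum(values)

def run_mapreduce(records):
    """Sequential stand-in for a framework run (the shuffle is a group-by-key)."""
    groups = defaultdict(list)
    for record in records:
        for key, value in map_flow(record):
            groups[key].append(value)
    return dict(reduce_bytes(k, v) for k, v in groups.items())

flows = [("10.0.0.1", "10.0.0.9", 500),
         ("10.0.0.2", "10.0.0.9", 300),
         ("10.0.0.1", "10.0.0.8", 200)]
totals = run_mapreduce(flows)
print(totals)  # {'10.0.0.1': 700, '10.0.0.2': 300}
```

In a real framework the map and reduce functions would be distributed across workers, which is where the fault-tolerance benefits mentioned above come from.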
|
802 |
Støyreduksjon av hyperspektrale bilder / Noise reduction of hyperspectral images
Fjerdingen, Sverre, January 2010 (has links)
<p>Noise reduction has been performed on hyperspectral images in both the spectral direction and the spatial directions. The algorithms used to achieve noise reduction are Principal Component Analysis (PCA), Maximum Noise Fraction (MNF) and the wavelet transform. The MNF algorithm has been run with many different noise estimators to determine which of them yield high signal-to-noise ratios. Performing the noise estimation in the Fourier domain has also been investigated. This gave good results when the phase difference with neighbouring pixels was used as the estimator for hyperspectral images captured under white light. When the source was changed to a 355 nm laser, however, the results were far poorer. Only the Haar transform was used for the wavelet transformation; it gave poor noise suppression in both the spectral direction and the spatial directions. The PCA and MNF algorithms work well for noise reduction. In the spectral direction there is little difference between PCA and the various noise estimates used with MNF. In the spatial plane, however, larger differences appear between them. This applies especially to spectral bands with low intensity and much noise, where PCA gives better noise suppression than MNF. The noise reduction with PCA and MNF comes as a direct consequence of limiting the number of principal components in the inverse transformation. Where the limit should be set for which principal components to retain was also assessed. When the light-source conditions for the hyperspectral images are compared, the limit is chosen so that 99.25% of the original signal is preserved. The spectrum of hyperspectral images captured under white light has high intensity at long wavelengths and low intensity at short wavelengths. If the light source is changed to a 355 nm laser, however, one gets low intensity at long wavelengths and high intensity at short wavelengths.</p>
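The PCA denoising step described in the abstract (truncate the principal-component expansion, then transform back) can be sketched as follows. The 99.25 % energy threshold is taken from the abstract; the synthetic rank-1 "scene" and the noise level are assumptions for illustration:

```python
import numpy as np

def pca_denoise(cube, energy=0.9925):
    """Denoise a hyperspectral cube, flattened to (pixels, bands), along the
    spectral axis by truncating the principal-component expansion."""
    mean = cube.mean(axis=0)
    centered = cube - mean
    # Eigendecomposition of the band-by-band covariance matrix
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]                      # strongest first
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    # Keep just enough components to retain the requested variance fraction
    k = int(np.searchsorted(np.cumsum(eigvals) / eigvals.sum(), energy)) + 1
    basis = eigvecs[:, :k]
    # Project onto the retained components and transform back
    return (centered @ basis) @ basis.T + mean, k

rng = np.random.default_rng(0)
signal = np.outer(rng.normal(size=1000), rng.normal(size=50))  # rank-1 "scene"
noisy = signal + 0.1 * rng.normal(size=signal.shape)
denoised, k = pca_denoise(noisy)
print(k)  # number of principal components retained
```

Discarding the trailing components removes variance that is mostly noise while keeping the components that carry the signal, which is exactly the mechanism the abstract attributes to PCA and MNF.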
|
803 |
"World of NTNU" : - et seriøst spill for rekrutterings og forskningshensikter / "World of NTNU" : A serious game for both university recruitment and research platform
Lomeland, Janette Haugland, January 2010 (has links)
<p>This project started as a continuation of Håvard Richvold's master's thesis, in which he developed a serious game for recruitment at NTNU. The preceding specialization project began with designing an MMO, a "massive multiplayer online" game set in a virtual world. A virtual world consists of the following essential elements: the concept of presence, a persistent environment and objects, and interaction with other individuals through their avatars. What turns a virtual world into a game, i.e. an MMO game, is the inclusion of gameplay elements, goals and levels. The specialization project resulted in a game design document (GDD) presenting the concept of the game and its elements. This is a recruitment game for NTNU and is therefore a serious game, that is, a game made to give the player information and knowledge about the university rather than being designed purely for entertainment. What makes this game concept unique is that the player acquires knowledge by performing crafting tasks that demonstrate the theory of different fields of study in practice, forming a gameplay that can interest both prospective students and current students at NTNU. Engaging both of these target groups gives the game a community that enables "co-creation", meaning the players can obtain information through each other. In the master's thesis, work has been done on creating a proof of concept of the game mechanics, using Flash to demonstrate how the crafting tasks can look and function. To test the concept, play tests were carried out with a small reference group, and a survey was conducted to map the attitudes of NTNU students towards the idea. Testing is an important part of game development, both to validate the concept and to find bugs.
Testing keeps the player in focus during development, which is at the core of a successful game, while also letting the player feel the care the developers have put into the game, so that it becomes a good user experience. A group has been formed and development of the MMO game itself has started. Using the platform from Parallel World Labs (PWL), which they used to create the virtual world Virtuelle Rockheim, together with the world of the main building at Gløshaugen that Håvard Richvold created, a virtual world has been built in which several people can be present simultaneously. This connection is the beginning of the game World of NTNU (WON), which is being made for recruitment purposes and to serve as a research platform.</p>
|
804 |
Video Quality Assessment in Broadcasting
Prytz, Anders, January 2010 (has links)
<p>In broadcasting, the assessment of video quality is mostly done by a group of highly experienced people. This is a time-consuming task that demands a lot of resources. The goal of this thesis is to investigate the possibility of assessing perceived video quality with objective quality assessment methods. The work is done in collaboration with Telenor Satellite Broadcasting AS, to improve their quality verification process from a broadcasting perspective. The material used is from the SVT Fairytale tape and a tape from the Norwegian cup final in football 2009. All material is in the native resolution of 1080i and is encoded in the H.264/AVC format. All chosen compression settings are more or less used in daily broadcasting. A subjective video quality assessment has been carried out to create a comparison basis of perceived quality. The subjective assessment sessions were carried out following ITU recommendations. Telenor SBc provided a video quality analysing system, the Video Clarity ClearView system, which contains the objective PSNR, DMOS and JND methods. DMOS and JND are two pseudo-subjective assessment methods that map objective measurements to subjective results. The methods are intended to predict perceived quality and ease quality assessment in broadcasting. The correlation between the subjective and objective results is tested with linear, exponential and polynomial fitting functions. The correlation for the different methods did not achieve a result that proves the use of objective methods to assess perceived quality independent of content. The best correlation result is 0.75, for the objective DMOS method. The analysis shows that there are possible dependencies in the relationship between subjective and objective results. By measuring spatial and temporal information, possible content-dependent correlation results are investigated. The results for dependent relationships between subjective and objective results are good.
There are some indications that the two pseudo-subjective methods, JND and DMOS, can be used to assess perceived video quality. This applies when the mapping functions are dependent on the spatial and temporal information of the reference sequences. The correlations achieved for dependent fitting functions that have a suitable progression are in the range 0.9 to 0.98. In the subjective tests, the subjects used were non-experts in quality evaluation. Some of the results indicate that subjects might have problems assessing sequences with high spatial information. This thesis creates a basis for further research on the use of objective methods to assess perceived quality.</p>
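The fitting-and-correlation procedure described above can be sketched as follows. The scores below are invented for illustration; the thesis correlates real objective outputs (e.g. DMOS) against subjective ratings:

```python
import numpy as np

# Hypothetical objective scores (e.g. DMOS) and subjective ratings
objective = np.array([20.0, 28.0, 35.0, 42.0, 55.0, 63.0, 70.0])
subjective = np.array([1.2, 1.8, 2.4, 2.9, 3.8, 4.3, 4.7])

# Fit linear and polynomial mapping functions from objective to subjective
linear = np.polyfit(objective, subjective, deg=1)
cubic = np.polyfit(objective, subjective, deg=3)

def correlation(coeffs):
    """Pearson correlation between mapped objective scores and the ratings."""
    predicted = np.polyval(coeffs, objective)
    return np.corrcoef(predicted, subjective)[0, 1]

print(f"linear fit r = {correlation(linear):.3f}")
print(f"cubic fit  r = {correlation(cubic):.3f}")
```

Because the higher-degree fit contains the linear one as a special case, its correlation can never be worse on the fitting data; the thesis's point is that the mapping only becomes reliable once the spatial and temporal information of the content is taken into account.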
|
805 |
Sharing in Collaborative Systems : A Set of Patterns for Information Sharing between Co-located Users
Aasgaard, Terje Svenkerud; Skjerdal, Åsmund, January 2010 (has links)
<p>Technological advances, specifically within the field of mobile and ubiquitous technologies, hold the promise to support collaboration in work and educational environments in new ways. Within collaborative systems, it is possible to use ubiquitous technology to provide users with services to interact - for instance share information - with other users in a given environment. Over the course of this project, the authors have created a set of design principles for co-located information sharing in collaborative systems, using a structured method called patterns. The aim of these patterns is to help designers and developers of collaborative systems take advantage of mobile and ubiquitous technology when designing and implementing support for co-located sharing. The patterns were based on a set of recurring problems identified as important for co-located information sharing between users. These problems were identified by reviewing relevant literature, research and existing solutions on the subject. An initial set of patterns was created based on this review. The patterns themselves are written at an abstraction level that targets the human-computer interaction part of sharing information between co-located users. The patterns were then evaluated by three experts within system engineering and collaborative systems, in an iterative process. The overall aim of these evaluations was to ensure that the patterns were easy to understand and provided information relevant to the problem and the domain, in order to be useful in the development process of collaborative systems. The result of these evaluations culminated in a final set of patterns for co-located information sharing.
These patterns describe guidelines for: (1) How users can specify the information they wish to share and the receiver(s) of that information, (2) how users can be aware of the potential for collaboration, (3) how situated displays can be used to share information, (4) how user privacy should be protected and (5) how information should be available when the user needs it. The final set of patterns is given in chapter 6 of the thesis.</p>
|
806 |
Voice Transformation based on Gaussian mixture models
Gundersen, Terje, January 2010 (has links)
<p>In this thesis, a probabilistic model for transforming a voice to sound like another specific voice is tested. The model is fully automatic and only requires some 100 training sentences from both speakers with the same acoustic content. The classical source-filter decomposition allows prosodic and spectral transformation to be performed independently. The transformations are based on a Gaussian mixture model and a transformation function suggested by Y. Stylianou. Feature vectors of the same content from the source and target speaker, aligned in time by dynamic time warping, are fitted to a GMM. The short-time spectra, represented as cepstral coefficients derived from LPC, and the pitch periods, represented as the fundamental frequency estimated with the RAPT algorithm, are transformed with the same probabilistic transformation function. Several techniques of spectrum and pitch transformation were assessed, in addition to some novel smoothing techniques for the fundamental frequency contour. The pitch transform was applied to the excitation signal from the inverse LP filtering by time-domain PSOLA. The transformed spectrum parameters were used in the synthesis filter, with the transformed excitation as input, to yield the transformed voice. A listening test was performed with the best setup from the objective tests, and the results indicate that it is possible to recognise the transformed voice as the target speaker with a 72% probability. However, the synthesised voice was affected by a muffling effect due to incorrect frequency transformation, and the prosody sounded somewhat robotic.</p>
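The dynamic time warping step mentioned above, which aligns source and target feature sequences frame by frame before GMM training, can be sketched with a minimal textbook implementation on toy 1-D features (this is not the thesis code):

```python
import numpy as np

def dtw_align(source, target):
    """Align two feature sequences (frames x dims) with dynamic time warping;
    returns the optimal list of matched (source, target) frame-index pairs."""
    n, m = len(source), len(target)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(source[i - 1] - target[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    # Backtrack the optimal warping path from the end of both sequences
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

src = np.array([[0.0], [1.0], [2.0], [3.0]])
tgt = np.array([[0.0], [1.0], [1.0], [2.0], [3.0]])  # same content, spoken slower
pairs = dtw_align(src, tgt)
print(pairs)  # [(0, 0), (1, 1), (1, 2), (2, 3), (3, 4)]
```

The aligned pairs give the joint source/target feature vectors that the GMM is then fitted to; in the thesis the features are cepstral vectors rather than scalars.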
|
807 |
Modulation Methods for Neutral-Point-Clamped Three-Level Inverter
Floten, Sveinung; Haug, Tor Stian, January 2010 (has links)
<p>Multilevel converters have seen increasing popularity in recent years for medium- and high-voltage applications. The most popular has been the three-level neutral-point-clamped converter, and research is still ongoing to improve its control. This master's thesis is a continuation of the specialization project of fall 2009. The main topics of the current thesis were to further investigate the DC-bus balancing issues, compare symmetrical (one sampling per triangular wave) and asymmetrical (sampling at the top and bottom of the triangular wave) modulation, derive current equations for Space Vector and Double-Signal modulation, improve the output voltage in overmodulation while remaining able to balance the DC-bus, and implement the methods in the laboratory. Models of the three-level converter were made in the specialization project in both PSCAD and SIMULINK, and further studies of the DC-bus balance were made in this master's thesis. None of the methods had problems regulating the DC-bus voltage when there were different capacitor values and an unsymmetrical load. A PI controller was introduced for Space Vector modulation, but it did not show better performance than a regular P regulator. Asymmetrical modulation showed clearly better performance than symmetrical modulation when the switching frequency was low compared to the fundamental frequency, especially for Space Vector modulation. The 1st harmonic line-to-line voltage was closer to the desired value and the THDi was significantly lower. Simulations also showed that the THDi can vary significantly depending on the angle at which the first sampling is done. This is most pronounced for asymmetrical Space Vector modulation, but the pattern occurs in the other cases as well. By implementing an overmodulation algorithm, the amplitude of the 1st harmonic output voltage came closer to the desired value. Simulations showed how important it is to have three-phase sampling symmetry in overmodulation.
With a wrong switching frequency, the line-to-line output voltage dropped to 2.06 when operating in six-step, whereas the desired output value is 2.205. Hence there is a quite large mismatch, and the converter is sensitive to the switching frequency when operating in the higher modulation area. The balancing algorithm introduced for overmodulation is able to remove an initial offset without a notable change in the 1st harmonic output. Both Space Vector and Double-Signal modulation were tested in the laboratory with two separate DC sources. Asymmetrical and symmetrical modulation were tested, as was overmodulation. The laboratory results confirmed the simulated results, but since the switching was not synchronized in the laboratory, some errors occurred.</p>
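The six-step reference figure quoted above (a line-to-line fundamental of 2.205) is consistent with ideal square-wave phase voltages on a per-unit DC bus of 2; assuming that normalization, it can be reproduced numerically:

```python
import numpy as np

N = 6000                             # samples over one fundamental period
t = np.arange(N) / N                 # normalised time, one period
vdc = 2.0                            # per-unit DC-bus voltage (assumed)

def phase(shift):
    """Ideal six-step phase voltage: a +-Vdc/2 square wave."""
    return np.where(((t - shift) % 1.0) < 0.5, vdc / 2, -vdc / 2)

# Phases a and b are 120 degrees apart; take the line-to-line voltage
vab = phase(0.0) - phase(1.0 / 3.0)

# Amplitude of the 1st harmonic from the FFT
spectrum = np.fft.fft(vab)
fundamental = 2.0 * np.abs(spectrum[1]) / N
print(f"{fundamental:.3f}")          # close to 2*sqrt(3)*vdc/pi ~ 2.205
```

The analytical value follows from the square-wave Fourier series: the phase fundamental peak is (4/pi)(Vdc/2), and the line-to-line fundamental is sqrt(3) times that, giving 2.205 for Vdc = 2, which matches the desired value stated above.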
|
808 |
Detecting Identity Thefts In Open 802.11e Enabled Wireless Networks
Holgernes, Eirik, January 2010 (has links)
Open wireless networks are commonly deployed as a result of easy access, user-friendliness, and easy deployment and maintenance. These networks do not implement strong security features, and clients are prone to a myriad of possible attacks. Identity attacks are considered among the most severe, and as a result Intrusion Detection Systems (IDS) can be deployed. With the introduction of 802.11e/Quality-of-Service on a link-to-link basis in 802.11 networks, most IDSs will become obsolete, as they often rely on a detection technique known as MAC Sequence Counting Analysis. This specific technique becomes useless if 802.11e/QoS is enabled on the network. In this thesis I have analyzed the problem further and suggest new techniques, both implemented and verified as an IDS, as well as analytic theories, in order to enhance MAC Sequence Counting Analysis to cope with the new features of 802.11e. There has been related work on the same issue, but this thesis questions its use of unreliable physical parameters to detect attacks. As we will see, the new proposed techniques rely on analysis of the 802.11 standard and the 802.11e amendment, and do not depend on parameters that could be unreliable in urban and mobile environments. Experiments and analysis demonstrate the validity of the new suggested techniques, and the outcome of the thesis is divided into two parts: the development of an optimized Intrusion Detection System, and an enhanced algorithm to detect attacks that exploit the new features of 802.11e.
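MAC Sequence Counting Analysis, the baseline technique the thesis builds on, can be illustrated with a toy detector. 802.11 sequence numbers are 12-bit counters that wrap at 4096, so a spoofed frame typically produces an abnormal jump in the per-station counter; the threshold and the frame representation below are illustrative assumptions, not the thesis implementation:

```python
SEQ_MODULO = 4096          # 802.11 sequence numbers are 12-bit counters

def seq_gap(prev, current):
    """Forward gap between two sequence numbers, accounting for wrap-around."""
    return (current - prev) % SEQ_MODULO

def detect_spoofing(frames, threshold=64):
    """Flag frame indices whose sequence number jumps abnormally for a MAC.
    frames: list of (mac_address, sequence_number); threshold is illustrative."""
    last_seq, alerts = {}, []
    for index, (mac, seq) in enumerate(frames):
        if mac in last_seq and seq_gap(last_seq[mac], seq) > threshold:
            alerts.append(index)
        last_seq[mac] = seq
    return alerts

frames = [("aa:bb", 100), ("aa:bb", 101), ("aa:bb", 3000),  # spoofed frame
          ("aa:bb", 102), ("cc:dd", 4090), ("cc:dd", 3)]    # legit wrap-around
print(detect_spoofing(frames))  # [2, 3]
```

Note that both the spoofed frame and the legitimate frame that follows it are flagged, which is the interleaving signature this technique looks for. Under 802.11e, QoS stations keep separate sequence counters per traffic class, which is precisely why this naive per-MAC tracking breaks and why the thesis proposes enhanced techniques.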
|
809 |
Sustainable Smart House Technology Business Models : An Assessment of Rebound Effects
Rød-Knudsen, Line, January 2010 (has links)
Smart House Technologies have in earlier research been put forward as an effective measure to reduce CO2 emissions. A rebound effect is defined as the change in energy demand caused by changes in consumer behavior. This behavior occurs when energy-efficient technology such as Smart House Technology is introduced. Energy-efficient technology has been found to stimulate increased usage through the appearance of new services and usage areas. This leads to an increase in overall energy demand, causing a take-back in energy efficiency called a rebound effect. This thesis has conducted an analysis of potential rebound effects from Smart House Technologies. The analysis is based on a constructed case study and interviews with users. The research shows that people tend to react positively to feedback on energy consumption. By incorporating this into business models, desired rebound effects, such as increased environmental awareness among users, can be enhanced. Another finding was that Smart House Technology providers should take measures to prevent direct cost savings from reaching the users and causing unwanted rebound effects through wealth maximization. The results have been discussed at micro- and macro-economic levels. A suggestion for a sustainable business model at the micro level is presented: an environmentally friendly bonus point scheme. The scheme can be utilized to capture the free cash from energy savings achieved with Smart House Technology. Customer relationships, partner networks and revenue models were identified as the three elements in Osterwalder's business model ontology that considerably influence rebound effects arising from the use of Smart House Technologies.
|
810 |
A Survey of Modern Electronic Voting Technologies
Stenbro, Martine, January 2010 (has links)
Over the last decade, electronic voting has evolved from being a means of counting votes to also offering the possibility of electronically casting votes. From recording votes using punch cards and optical scan systems, electronic voting has evolved into the use of direct-recording electronic machines. Voting over the Internet has also become a hot research topic, and some implementation and testing have been done. Internet voting systems are significantly more vulnerable to threats from external attackers than systems that cast ballots in controlled environments. Mechanisms to provide security, accuracy and verification are critical, and issues with coercion and usability also arise. In the first part of this thesis we give a theoretical study of existing electronic voting techniques, as well as the requirements and security issues of modern electronic voting systems. We also give a brief background on some cryptographic mechanisms and systems. Secondly, we present two modern voting solutions in development. We have included the security functionality provided by each system, the cryptographic techniques used, and some threats and attacks against the systems. These systems can be exposed to compromised computers, ballot stuffing and corrupt infrastructure players, but use cryptographic proofs to ensure accuracy and counter attacks. In the third part, we create a procedure and perform a usability test on one of these modern voting solutions. Our findings emphasize the tension between verifiable elections and usability. Voters trust the privacy and accuracy of such voting systems if more guidance on utilizing the means of verification is included and a trusted third party verifies the system security. The advantages of electronic voting outweigh the risks. Internet voting is a topic for further discussion and testing, but considering coercion and the insecure aspects of the medium, Internet voting will never be 100% safe. It is a question of a trade-off between the advantages and the threats.
|