41

Increasing Identity Governance when using OpenID : Hosting an OpenID Identity Provider on a smartphone

Stien, Eirik January 2011 (has links)
In the area of identity management, OpenID is an identity system allowing users to log in to OpenID-enabled web sites by proving ownership of an OpenID Identifier, authenticating with its controlling OpenID Identity Provider. A user can choose to host an OpenID Identity Provider herself or trust existing third-party providers such as Google. Technical skill is required for the former, leaving it unavailable to the average user. This thesis simplifies the matter by implementing an OpenID Identity Provider as a smartphone application, making use of the server-like features inherent in such devices. New possibilities for authenticating the user arise as she can physically interact with the OpenID Identity Provider, which in the traditional scheme is done through the web browser. As a result of these new possibilities, phishing attacks are claimed to be avoided, and identity attributes are exempted from being controlled, and possibly exploited, by any third party. One of several technical challenges is enabling the smartphone to receive inbound connections, as this is required by the OpenID Authentication protocol but restricted by telecom operators by default. Functionality must be in place to back up the identity repositories stored on the smartphone, so that the established OpenID identities are not lost if the device becomes lost or damaged. Lastly, focus is given to making the solution easily applicable even for the novice consumer.
42

Privacy Policies for Location-Aware Social Network Services

Hjulstad, Ingrid January 2011 (has links)
The combination of location-awareness and social networks has introduced systems containing an increased amount of protection-worthy personal information, creating the need for improved privacy control from a user point of view. End-user privacy requirements were derived from identified end-user privacy preferences. These requirements were used to evaluate the end-user privacy control of current Location-Aware Social Network Services (LASNSs), as well as to help develop relevant enhancements. The requirements state that users should be able to control (if they wish) which of the objects related to them are accessed by whom, in what way and under which conditions. Two enhancement ideas which together help fulfill this requirement have been presented. The few LASNSs offering user-specified access control rules provide only a small list of pre-defined subjects (e.g. "Friends", "Everyone"). This list is too limited for specifying many fine-grained privacy preferences. With a more extensive implementation of Role Based Access Control (RBAC) in LASNSs, with the user as the administrator of roles, users will be able to create roles (e.g. "colleague", "close friend", "family"), assign them to their connections, and specify these roles as subjects in access control rules. The user will also be allowed to specify conditions under which subjects/roles can access an object. These conditions can be based on system attributes of the object owner (e.g. location), of the subject requesting access (e.g. age), or on external attributes (e.g. time). A suitable user-friendly access control user interface has been proposed, showing how this can be presented in an effective and understandable way to the user. A few example user privacy preferences, each representing one of the identified end-user privacy control requirements, have been translated from data sent to the system through the proposed interface into formal languages such as Datalog and XACML. Current end-user privacy control can be improved by making more fine-grained access control rule specification possible through the proposed enhancements, which are suitable both from an end-user perspective and from a developer's point of view.
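The abstract above proposes user-defined roles as rule subjects together with attribute-based conditions. The minimal sketch below (all names, roles and conditions are hypothetical, not taken from the thesis) shows one way such rules could be evaluated for an access request.

```python
# Sketch, not the thesis's implementation: user-defined roles as rule
# subjects, plus attribute-based conditions, as proposed for LASNSs.
from datetime import time

# Hypothetical data: the owner assigns self-defined roles to connections.
role_assignments = {"alice": {"close friend", "colleague"}, "bob": {"family"}}

# A rule grants a role access to an object if every condition holds.
rules = [
    {"object": "current_location",
     "role": "close friend",
     "conditions": [lambda ctx: time(8, 0) <= ctx["time"] <= time(22, 0)]},
    {"object": "photo_album",
     "role": "family",
     "conditions": []},
]

def access_allowed(requester, obj, ctx):
    """True if some rule for obj names one of the requester's roles
    and all of that rule's conditions hold in the given context."""
    roles = role_assignments.get(requester, set())
    for rule in rules:
        if rule["object"] == obj and rule["role"] in roles:
            if all(cond(ctx) for cond in rule["conditions"]):
                return True
    return False

print(access_allowed("alice", "current_location", {"time": time(14, 30)}))  # True
print(access_allowed("bob", "current_location", {"time": time(14, 30)}))    # False
```

In an actual LASNS such rules would be stored server-side and could be exported to a policy language such as XACML, as the thesis discusses.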
43

Privacy services for mobile devices

Bø, Solvår, Pedersen, Stian Rene January 2011 (has links)
Recent studies have shown that privacy on mobile devices is not properly ensured. Due to a heavy increase of smartphones in the market, in addition to a variety of third-party applications, a demand for improved privacy solutions has arisen. Our objective is to extend users' ability to control applications' access to resources at run-time. We investigate whether such a solution is adequate for properly maintaining privacy. We propose a design that provides a higher degree of control by allowing users to set preferences that determine what personal information to share. Previous efforts only give users a binary choice of whether to fake personal information or not. We offer a more flexible solution that allows users to set preferences with a higher degree of granularity. We implement selected parts of our design in order to evaluate whether the solution serves as a utility. Further evaluation is necessary in order to fully accept or reject the idea; however, our initial results are promising.
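To illustrate preference-based granularity rather than a binary real/fake choice, here is a small sketch; the preference levels and the coordinate-rounding scheme are assumptions made for illustration, not the design described in the thesis.

```python
# Illustrative only: a per-application preference controls how precise the
# location returned to that application is, instead of all-or-nothing faking.
PRECISION = {"exact": None, "city": 2, "region": 1, "none": 0}  # decimal places

def filtered_location(lat, lon, preference):
    """Return a location degraded to the granularity the user chose."""
    if preference == "exact":
        return lat, lon
    if preference == "none":
        return None  # share nothing at all
    places = PRECISION[preference]
    return round(lat, places), round(lon, places)

# Example: a game only ever sees a coarse, city-level position.
print(filtered_location(63.4195, 10.4065, "city"))    # (63.42, 10.41)
print(filtered_location(63.4195, 10.4065, "region"))  # (63.4, 10.4)
```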
44

Advanced Electronic Signature

Azizi, Fazel Ahmad January 2011 (has links)
DiFi, Altinn and Lånekassen will implement a national digital signature to sign document submissions and mutual agreements. It is anticipated that a pilot will be launched in 2012. A digital signature is very different from a handwritten signature, for instance in how to establish what you actually sign. Moreover, the verification of a digital signature requires a correct and valid public key, whereas a handwritten signature is physically produced by a person. The candidate of this project will try to understand the signature applications of Altinn and Lånekassen, then analyze the proposed digital signature architecture and the standards to be used in the DiFi pilot, and assess the utility and security of this solution compared to the existing Altinn "login signature". Furthermore, the candidate will try to identify one or more parts of the architecture that can be given an alternative solution, and state the arguments supporting that this would be an improvement. If time allows, software experiments that support the claims may be carried out.
45

Real-Time Simulation of Reduced Frequency Selectivity and Loudness Recruitment Using Level Dependent Gammachirp Filters

Bertheussen, Gaute January 2012 (has links)
A real-time system for simulating reduced frequency selectivity and loudness recruitment was implemented in the C programming language. The finished system is an executable program where a user can input a sound file and a list of hearing losses. As the program runs, a processed version of the input signal is played back. The processed signal includes the effects of either one or both of the hearing impairments. The system, called a hearing loss simulator, is based on the dynamic compressive gammachirp filter bank. Each channel in the filter bank is signal dependent, meaning the filter characteristics change according to an estimate of the signal level. Reduced frequency selectivity was simulated by letting a hearing loss value, in addition to the signal level, influence the filter characteristics. This produced masking effects and reduced the detail of spectral envelopes. Loudness recruitment was simulated by scaling each sample based on the signal level. This technique accounted for abnormal growth of loudness level and elevated absolute thresholds. It made low sounds disappear while leaving loud sounds closer to their original level.
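The level-dependent scaling described above can be illustrated with a toy sketch. This is a deliberate simplification (a single broadband channel with an assumed expansion exponent and reference level), not the gammachirp-based C implementation of the thesis.

```python
# Toy sketch of level-dependent scaling for loudness recruitment: samples whose
# estimated level is well below a reference level are attenuated strongly,
# while samples near conversational level are left mostly intact.
import numpy as np

def simulate_recruitment(x, fs, hearing_loss_db=40.0, win_ms=10.0):
    """Scale each sample by a gain derived from a short-term level estimate."""
    win = max(1, int(fs * win_ms / 1000.0))
    # Short-term RMS envelope (smoothed squared signal).
    env = np.sqrt(np.convolve(x ** 2, np.ones(win) / win, mode="same")) + 1e-12
    level_db = 20.0 * np.log10(env)        # level relative to full scale
    ref_db = -20.0                          # assumed reference level
    expo = 1.0 + hearing_loss_db / 40.0     # assumed expansion exponent
    # Below the reference level the gain falls off, mimicking how soft sounds
    # become inaudible with recruitment; at or above it the signal is untouched.
    gain_db = np.minimum(0.0, (level_db - ref_db) * (expo - 1.0))
    return x * 10.0 ** (gain_db / 20.0)

# Usage: a soft 440 Hz tone at 16 kHz is clearly attenuated.
fs = 16000
t = np.arange(fs) / fs
x = 0.05 * np.sin(2 * np.pi * 440 * t)
y = simulate_recruitment(x, fs)
print(abs(y).max() / abs(x).max())  # well below 1 for soft input
```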
46

End-User service composition framework and application

Kulstad, Rune Bleken January 2012 (has links)
In today’s public market, mobile phones have become a part of everyday life. The introduction of smartphones has created a new market for services and applications, and many users would benefit from customizing their own services to fulfil their needs. This can be achieved with end-user service composition, which enables users to compose their own services from already existing components to provide value-added services. In this Master’s thesis a service composition tool consisting of the two applications Easy Composer and EasyDroid is presented. The idea of the tool is that ordinary people without a technical background should be able to quickly compose their own services in a simple manner. The existing tool has been in development for a while, but still lacks sufficient quality in terms of usability and utility for ordinary people to make use of it. Utility refers to what the tool can be used for, and usability to its user-friendliness and usefulness. In this Master’s thesis a new system has been made for the service composition tool. The Easy Composer application has been discarded and a new web-based GUI has replaced its functionality. In addition, the EasyDroid application has been remade and a new server side has been developed. Furthermore, the communication between the different parts has been improved. The usability and utility of the previous system have been considerably improved in the new system; in other words, the existing functionality has been made more user-friendly and new functionality has been added to the tool. The goal is that the service composition tool will have sufficient quality and novelty for ordinary users to embrace it.
47

MCTF and JPEG 2000 Based Wavelet Video Coding Compared to the Future HEVC Standard

Erlid, Frøy Brede Tureson January 2012 (has links)
Video and multimedia content has over the years become an important part of our everyday life. At the same time, the technology available to consumers has become more and more advanced. These technologies, such as streaming services and advanced displays, have enabled us to watch video content on a large variety of devices, from small, battery-powered mobile phones to large TV sets. Streaming of video over the Internet is a technology that is getting increasingly popular. As bandwidth is a limited resource, efficient compression techniques are clearly needed. The wide variety of devices capable of streaming and displaying video suggests a need for scalable video coders, as different devices might support different sets of resolutions and frame rates. As a response to the demand for efficient coding standards, VCEG and MPEG are jointly developing an emerging video compression standard called High Efficiency Video Coding (HEVC). The goal for this standard is to improve the coding efficiency compared to H.264 without affecting image quality. A scalable video coding extension to HEVC is also planned. HEVC is based on the classic hybrid coding approach. This, however, is not the only way to compress video, and attention is given to wavelet coders in the literature. JPEG 2000 is a wavelet image coder that offers spatial and quality scalability. Combining JPEG 2000 with Motion Compensated Temporal Filtering (MCTF) gives a wavelet video coder which offers temporal, spatial and quality scalability without the need for complex extensions. In this thesis, a wavelet video coder based on the combination of MCTF and JPEG 2000 was implemented. This coder was compared to HEVC by performing objective and subjective assessments, with the use case being streaming of video over a typical consumer broadband connection. The objective assessment showed that HEVC was the superior system in terms of both PSNR and SSIM. The subjective assessment revealed that observers preferred the distortion produced by HEVC over that of the proposed system. However, the results also indicated that improvements to the proposed system can be made that could enhance the objective and subjective quality. In addition, there were indications that a use case operating at higher bit rates is more suitable for the proposed system.
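For readers unfamiliar with MCTF, the sketch below shows one level of Haar-style temporal lifting with the motion compensation step omitted; it is illustrative only and is not the coder implemented in the thesis, which warps frames along motion vectors before filtering.

```python
# Simplified one-level Haar temporal lifting (the backbone of MCTF),
# without motion compensation.
import numpy as np

def haar_mctf_level(frames):
    """Split a list of frames into temporal low-pass and high-pass subbands."""
    lows, highs = [], []
    for a, b in zip(frames[0::2], frames[1::2]):
        h = b - a          # prediction step: residual of frame b from frame a
        l = a + h / 2.0    # update step: low-pass ~ average of the pair
        highs.append(h)
        lows.append(l)
    # The low-pass band can be filtered again for more temporal levels;
    # the resulting subbands are then coded independently, e.g. with JPEG 2000.
    return lows, highs

# Usage on dummy 2x2 "frames": the low-pass band carries the pairwise averages.
f = [np.full((2, 2), v, dtype=float) for v in (10, 12, 20, 26)]
lows, highs = haar_mctf_level(f)
print(lows[0][0, 0], highs[0][0, 0])  # 11.0 2.0
```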
48

Subjective and Objective Crosstalk Assessment Methodologies for Auto-stereoscopic Displays

Skildheim, Kim Daniel January 2012 (has links)
Stereoscopic perception is achievable when the observer sees a scene from two slightly different angles. Auto-stereoscopic displays utilize several separate views to achieve this without any special glasses. Crosstalk is an undesired effect of separating views, and one of the most annoying artefacts occurring in an auto-stereoscopic display. This experiment has two parts. The first part proposes a subjective assessment methodology for characterizing crosstalk in an auto-stereoscopic display without restricting the subjects’ viewing behaviour. The intention was to create an inexpensive method. The measurement was performed by using a Kinect PrimeSense sensor as a head tracking system, combined with subjective score evaluation, to obtain a data plot of the perceived crosstalk. The crosstalk varies with image content, disparity and viewing position. The result is a data plot that approaches a periodic pattern, which is consistent with the characteristics of an auto-stereoscopic display. The result is not perfect, since there are many sources of error; these can be reduced with better head tracking, an improved movement system, post-processing of data, more data and removal of outliers. The second part proposes methods for extracting subjective values based on interpolated plots and for creating objective crosstalk-influenced pictures which correlate with the subjective data. The best extraction method was to combine an adapted sine regression curve with a linear interpolation. This interpolation followed the subjective values in a parallel slice plot at 3.592 m from the screen. The interpolation was adapted to fit a derived model as well as possible to achieve a good correlation. Objective crosstalk pictures were created, where the amount of crosstalk was determined by the neighbouring view that influenced the current view the most. The correlation was based on the relationship between the SSIM value of the created crosstalk picture and the extracted subjective value. The total correlation of the pictures together was 0.8249, and the picture with the highest correlation reached 0.9561. This method is quite good for pictures that have a maximum disparity below 38 pixels. The overall result is good, and it also serves as a measure of quality for the subjective test. The result can be improved by increasing the complexity of how the objective crosstalk pictures are created, by taking more views into account or by trying another method to create crosstalk. Improved extraction of subjective values would also be beneficial for improving the correlation further.
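The objective part can be illustrated with a small sketch: leak a fraction of the dominant neighbouring view into the intended view and score the result with SSIM. The leakage value and the shifted-view stand-in are assumptions for illustration, not the thesis's procedure; SSIM is taken from scikit-image.

```python
# Sketch: simulate crosstalk by leaking the strongest neighbouring view into
# the current view, then score the degraded view against the clean one.
import numpy as np
from skimage.metrics import structural_similarity as ssim

def add_crosstalk(view, neighbour, leakage):
    """Blend a fraction of the neighbouring view into the intended view."""
    return (1.0 - leakage) * view + leakage * neighbour

rng = np.random.default_rng(0)
clean = rng.random((240, 320))
neighbour = np.roll(clean, 12, axis=1)   # crude stand-in for a disparate view
degraded = add_crosstalk(clean, neighbour, leakage=0.15)

# Lower SSIM -> more visible crosstalk; such scores are what the thesis
# correlates with the subjective ratings gathered at each viewing position.
print(ssim(clean, degraded, data_range=1.0))
```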
49

Evaluating QoS and QoE Dimensions in Adaptive Video Streaming

Stensen, Julianne M. G. January 2012 (has links)
The focus of this thesis has been on Quality of Service (QoS) and Quality of Experience (QoE) dimensions of adaptive video streaming. By carrying out a literature study reviewing the state of the art on QoS and QoE, we have proposed several quality metrics applicable to adaptive video streaming, amongst them initial buffering time, mean duration of a rebuffering event, rebuffering frequency, quality transitions and bitrate. Perhaps counterintuitively, other research has found that a higher bitrate does not always lead to a higher degree of QoE. If one looks at bitrate in relation to quality transitions, it has been found that users may prefer a stable video stream, with fewer quality transitions, at the cost of an overall higher bitrate. We have conducted two case studies to see if this is considered by today’s adaptive video streaming technologies. The case studies have been performed by means of measurements on the players of Tv2 Sumo and Comoyo. We have exposed the players to packet loss and observed their behavior using tools such as Wireshark. Our results indicate that neither player takes the cost of quality transitions into account in its rate adaptation logic; the players rather strive for a higher quality level. In both cases we have observed a relatively large number of quality transitions throughout the various sessions. If we were to give any recommendations to the Over-the-Top (OTT) service providers, we would advise them to investigate the effects of quality transitions and consider including a solution for handling potentially negative effects in the rate adaptation logic of the player.
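To make the proposed metrics concrete, the sketch below computes them from a hypothetical player event log; the event names and timestamps are invented for illustration and do not come from the measured players.

```python
# Sketch: derive the proposed QoE metrics from a (made-up) playback event log.
events = [
    ("play_requested", 0.0),
    ("playback_started", 2.1),   # initial buffering time = 2.1 s
    ("rebuffer_start", 40.0),
    ("rebuffer_end", 43.5),
    ("quality_switch", 50.0),
    ("quality_switch", 65.0),
    ("session_end", 120.0),
]

def qoe_metrics(log):
    t_request = next(t for e, t in log if e == "play_requested")
    t_start = next(t for e, t in log if e == "playback_started")
    t_end = next(t for e, t in log if e == "session_end")
    # Pair each rebuffering start with the event that immediately follows it.
    stalls = [t2 - t1 for (e1, t1), (e2, t2) in zip(log, log[1:])
              if e1 == "rebuffer_start" and e2 == "rebuffer_end"]
    switches = sum(1 for e, _ in log if e == "quality_switch")
    minutes = (t_end - t_start) / 60.0
    return {
        "initial buffering time (s)": t_start - t_request,
        "mean rebuffering duration (s)": sum(stalls) / len(stalls) if stalls else 0.0,
        "rebuffering frequency (events/min)": len(stalls) / minutes,
        "quality transitions": switches,
    }

print(qoe_metrics(events))
```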
50

iVector Based Language Recognition

Tokheim, Åsmund Einar Haugland January 2012 (has links)
The focus of this thesis is a fairly new approach to phonotactic language recognition, i.e. identifying a language from the sounds in a spoken utterance, known as iVector subspace modeling. The goal of the iVector is to compactly represent the discriminative information in an utterance so that further processing of the utterance is less computationally intensive. This might enable the system to be trained with more data, and thereby reach a higher performance. We present both the theory behind iVectors and experiments to better fit the iVector space to our development data. The final system achieved results comparable to our baseline PRLM system on the NIST LRE03 30-second evaluation set.
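For context (standard background on iVectors, not a result of this thesis): the total variability model represents the GMM mean supervector of an utterance as

    M = m + T w,

where m is the mean supervector of the universal background model, T is a low-rank total variability matrix estimated on development data, and w is the low-dimensional iVector that serves as the compact representation of the utterance.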
