411
Performance Issues of Routing Protocols in MANET
Tahir, Saleem, January 2010
A mobile ad-hoc network is a collection of wireless mobile hosts that establishes a temporary network without the assistance of any centralized administrator. The characteristics of an ad-hoc network can be explored on the basis of its routing protocols. Dynamic topology is the vital characteristic: nodes frequently change their positions. Ad-hoc networks consist of mobile nodes such as Personal Digital Assistants (PDAs), smartphones and laptops, which have limited operational resources like battery power and bandwidth. Minimizing control traffic is a main responsibility of the routing protocols, achieved by selecting the shortest paths and controlling the traffic. In this study, we focus on performance issues of the routing protocols Optimized Link State Routing (OLSR), Ad Hoc On-Demand Distance Vector (AODV), Dynamic Source Routing (DSR) and Temporally Ordered Routing Algorithm (TORA) in mobile and standalone ad-hoc network scenarios. For this purpose, we first study and explain these protocols, and then use the Optimized Network Engineering Tool (OPNET) Modeler to analyze the performance metrics delay, throughput and network load.
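Two of the three metrics analyzed in the study, delay and throughput, can be illustrated with a minimal sketch over a hypothetical packet trace (the `Packet` record and its fields are illustrative, not OPNET's data model; network load is an OPNET-specific statistic and is omitted here):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Packet:
    size_bits: int
    sent_at: float                 # seconds
    received_at: Optional[float]   # None if the packet was dropped

def delay_throughput(packets, duration_s):
    """Average end-to-end delay (s) and throughput (bit/s) for a run."""
    delivered = [p for p in packets if p.received_at is not None]
    avg_delay = sum(p.received_at - p.sent_at for p in delivered) / len(delivered)
    throughput = sum(p.size_bits for p in delivered) / duration_s
    return avg_delay, throughput

trace = [Packet(8000, 0.0, 0.05), Packet(8000, 0.1, 0.17), Packet(8000, 0.2, None)]
avg, thr = delay_throughput(trace, duration_s=1.0)
print(round(avg, 3), thr)  # 0.06 16000.0
```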
412
Towards Measurable and Tunable Security
Lundin, Reine, January 2007
Many security services today provide only one security configuration at run-time and therefore cannot utilize the trade-off between performance and security. In order to make use of this trade-off, tunable security services are needed that provide several security configurations selectable at run-time. To make intelligent choices about which security configuration to use in different situations, we need to know how good they are, i.e., we need to order the different security configurations with respect to each security attribute, using measures for both security and performance. However, a key issue with computer security is that, due to its complex nature, it is hard to measure. As the title of this thesis indicates, it discusses both security measures and tunable security services, and can thus be seen to consist of two parts. The first part, discussing security measures for tunable security services, investigates the security implications of selective encryption by using guesswork as a security measure. Building on this, the relationship between guesswork and entropy is investigated. The result shows that guesswork, after a minor redefinition, is equal to the sum of the entropy and the relative entropy. The second part contributes to the area of tunable security services, i.e., services that provide several security configurations at run-time. In particular, we present the mobile Crowds (mCrowds) system, an anonymity technology for the mobile Internet developed at Karlstad University, and a tunable encryption service that is based on a selective encryption paradigm and designed as a middleware. Finally, an investigation of the tunable features provided by Mix-Nets and Crowds is carried out, using a conceptual model for tunable security services.
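The guesswork measure used in the first part can be sketched as follows. This is the standard definition (the expected number of guesses under an optimal guessing order), not the thesis's redefinition that equates guesswork with entropy plus relative entropy:

```python
import math

def guesswork(probs):
    """Expected number of guesses for a secret drawn from `probs`, assuming
    the attacker tries values in order of decreasing probability."""
    ordered = sorted(probs, reverse=True)
    return sum(i * p for i, p in enumerate(ordered, start=1))

def entropy(probs):
    """Shannon entropy in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Uniform 2-bit secret: entropy is 2.0 bits, guesswork is (1+2+3+4)/4 = 2.5
print(entropy([0.25] * 4), guesswork([0.25] * 4))  # 2.0 2.5
```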
413
Towards Inter-temporal Privacy Metrics
Berthold, Stefan, January 2011
Informational privacy of individuals has gained significantly in importance as information technology has become widely deployed. Data, once digitalised, can be copied and distributed at negligible cost. This has dramatic consequences for individuals, who leave traces in the form of personal data whenever they interact with information technology. The right of individuals to informational privacy, in particular to control the flow and use of their personal data, is easily undermined by those controlling the information technology. The objective of this thesis is the measurement of informational privacy, with a particular focus on scenarios where an individual discloses personal data to a second party, the data controller, which uses this data to re-identify the individual within a set of others, the population. Several instances of this scenario are discussed in the appended papers, most notably one which adds a time dimension to the scenario to model the effects of the time that passes between data disclosure and usage. This extended scenario leads to a new framework for inter-temporal privacy metrics. The common dilemma of all privacy metrics is their dependence on the information available to the data controller. The same information may or may not be available to the individual; as a consequence, the individual may be misguided in his decisions by his limited access to the data controller's information when using privacy metrics. The goal of this thesis is thus not only the specification of new privacy metrics, but also the contribution of ideas for mitigating this dilemma. A solution, however, will be a combination of technological, economic and legal means rather than a purely technical one.
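The re-identification scenario can be made concrete with a standard entropy-based anonymity metric, a common building block in this literature (not the thesis's inter-temporal framework itself): the data controller assigns each population member a probability of being the individual behind the disclosed data, and the entropy of that distribution gives an effective anonymity set size.

```python
import math

def effective_anonymity_set_size(probs):
    """2**H for the controller's distribution over population members;
    equals the population size when the controller has learned nothing."""
    h = -sum(p * math.log2(p) for p in probs if p > 0)
    return 2 ** h

print(effective_anonymity_set_size([0.25] * 4))            # 4.0: nothing learned
print(effective_anonymity_set_size([0.7, 0.1, 0.1, 0.1]))  # about 2.6: partial re-identification
```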
414
Code Profiling: Static Code Analysis
Borchert, Thomas, January 2008
Capturing the quality of software, and detecting the sections within it that warrant further scrutiny, are of high interest for industry as well as for education. Project managers request quality reports in order to evaluate the current status and initiate appropriate improvement actions, and teachers need to identify students who require extra attention and help with certain programming aspects. By means of software measurement, software characteristics can be quantified and the produced measures analyzed to gain an understanding of the underlying software quality. In this study, the technique of code profiling (the activity of creating a summary of distinctive characteristics of software code) was inspected, formalized and applied to a sample group of 19 industry and 37 student programs. When software projects are analyzed by means of software measurements, a considerable amount of data is produced. The task is to organize the data and draw meaningful information from the produced measures, quickly and without great expense. The results of this study indicate that code profiling can be a useful technique for quick program comparisons and continuous quality observations, with several application scenarios in both industry and education.
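As a toy illustration of what a code profile might collect, the sketch below counts a few distinctive characteristics of a Python source string; the metric set here is illustrative, not the one used in the study:

```python
def profile_source(code: str) -> dict:
    """A toy code profile over a Python source string.
    Real metric suites measure far more (complexity, coupling, etc.)."""
    lines = code.splitlines()
    stripped = [ln.strip() for ln in lines]
    return {
        "loc": sum(1 for ln in stripped if ln),                       # non-blank lines
        "comment_lines": sum(1 for ln in stripped if ln.startswith("#")),
        "functions": sum(1 for ln in stripped if ln.startswith("def ")),
        "max_indent": max((len(ln) - len(ln.lstrip())
                           for ln in lines if ln.strip()), default=0),
    }

sample = "def f(x):\n    # double it\n    return 2 * x\n"
print(profile_source(sample))  # {'loc': 3, 'comment_lines': 1, 'functions': 1, 'max_indent': 4}
```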
415
Video Quality Metric improvement using motion and spatial masking
Näkne, Henrik, January 2016
Objective video quality assessment is of great importance in video compression and other video processing applications. In today's encoders, Peak Signal to Noise Ratio (PSNR) or Sum of Absolute Differences are often used, though these metrics have limited correlation with perceived quality. In this paper, other block-based quality measures are evaluated and show superior performance on compression distortion when their correlation with Mean Opinion Scores is evaluated. The major result is that Block-based Visual Information Fidelity (BB-VIF) with optical flow and intra-frame Gaussian weighting outperforms PSNR, VIF and SSIM. In addition, a block-based weighted Mean Squared Error method is proposed that performs better than PSNR and SSIM, though not VIF and BB-VIF, with the advantage of high locality, which is useful in video encoding. The aforementioned weighting methods have not been evaluated with SSIM, which is proposed for further study.
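The locality advantage of block-based measures can be sketched as follows. This computes plain per-block MSE and deliberately omits the paper's optical-flow and Gaussian weighting, so it only illustrates the block structure, not the proposed metric:

```python
def block_mse(ref, dist, block=8):
    """Per-block MSE for two equally sized grayscale frames (lists of rows).
    Per-block scores preserve locality, which one frame-level score loses."""
    h, w = len(ref), len(ref[0])
    scores = []
    for by in range(0, h, block):
        for bx in range(0, w, block):
            errs = [(ref[y][x] - dist[y][x]) ** 2
                    for y in range(by, min(by + block, h))
                    for x in range(bx, min(bx + block, w))]
            scores.append(sum(errs) / len(errs))
    return scores

ref = [[100] * 8 for _ in range(8)]
dist = [[102] * 8 for _ in range(8)]
print(block_mse(ref, dist))  # [4.0]: one 8x8 block, squared error 4 per pixel
```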
416
Evaluating sustainable supply chain management: Using the Triple Top Line to evaluate sustainability in the textile industry
Goodman, Andrew, January 2018
Purpose: The purpose of this research is to explore how CSR reporting of supply chain management indicators and metrics in the textile and apparel industry relates to the Triple Top Line framework. Design/methodology/approach: This thesis first conducts a content analysis of sustainable supply chain management performance metrics and indicators, which are then matched to the conceptual framework of the Triple Top Line. Finally, a content analysis of branded marketers' and retailers' corporate social responsibility reports is conducted using the conceptual framework as a guideline. Findings: The results show that certain segments of the conceptual framework are underrepresented in terms of sustainable supply chain management performance indicators and metrics, and that the reporting of these metrics and indicators is still lacking as a whole within the textile and apparel industry. Research limitations/implications: The limitations of the thesis are that the analysis of the CSR reports was conducted through a manifest content analysis and could be improved by using a latent approach, and that, whilst the CSR reports of branded manufacturers and retailers were analysed, the researcher could have included fabric and fibre producers for an even more complete view of the industry. Originality/value: The value of this thesis is that it offers academics and practitioners a new conceptual framework with which to evaluate their CSR reporting and their measuring of sustainable supply chain management indicators and performance metrics.
417
Comparison of Video Quality Assessment Methods
Jung, Agata, January 2017
Context: The newest standard in video coding, High Efficiency Video Coding (HEVC), needs an appropriate encoder to fully exploit its potential. There are many video quality assessment methods, which are necessary to establish the quality of a video. Objectives: This thesis compares video quality assessment methods. The objective is to find out which objective method is most similar to the subjective method. The videos used in the tests are encoded in the H.265/HEVC standard. Methods: For testing the MSE, PSNR and SSIM methods, dedicated software was created in MATLAB; for the VQM method, existing downloadable software was used. Results and conclusions: For videos watched on a mobile device, PSNR is the most similar to the subjective metric; however, for videos watched on a television screen, VQM is the most similar to the subjective metric. Keywords: Video Quality Assessment, Video Quality Prediction, Video Compression, Video Quality Metrics
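Of the compared metrics, PSNR is the simplest to compute. A minimal sketch for a single grayscale frame given as a flat pixel list (the thesis's MATLAB implementation operates on full video sequences):

```python
import math

def psnr(ref, dist, peak=255):
    """PSNR in dB between two frames given as flat pixel lists."""
    mse = sum((r - d) ** 2 for r, d in zip(ref, dist)) / len(ref)
    return float("inf") if mse == 0 else 10 * math.log10(peak ** 2 / mse)

print(round(psnr([50, 100, 150, 200], [52, 98, 150, 200]), 2))  # 45.12
```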
418
Measuring in an Agile System Development Process: A Case Study in Health IT
Johansson, Felix; Uppugunduri, Samir, January 2017
The basic aim of any software development organization is to maximize value creation for any given investment. To amplify and speed up value creation efforts, Agile Software Development has gained much popularity during the last decade as a response to a volatile and disruptive market. In an Agile environment, the team focuses on lightweight working practices, constant deliveries and customer collaboration rather than heavy documentation and inflexible processes. However, the Agile way of working has complicated how an organization can control and evaluate the process, leading organizations to assume that their Agile process is ideal without evidence to support it. This master thesis was conducted as a case study at Sectra ImIT, an Agile Health IT company working with imaging and IT solutions that is currently in an early phase of introducing metrics in its System Development process. The purpose of this thesis was to investigate and suggest how the organization could use metrics to control and evaluate value creation in the System Development process. It also aimed to provide strategic recommendations on how such an organization could continue its work with implementing and using metrics. The descriptive and exploratory purpose of this study was realized through unstructured and semi-structured interviews with people involved in the process, as well as observations. One of the major findings of this thesis concerns a missing feedback loop from defects occurring at customer sites back to the internal System Development process. Therefore, this study developed and implemented a concept to generate this feedback. The concept builds on defect information that can be used both to generate feedback and to produce statistics for evaluation. The second major finding concerns the identification of barriers explaining why the organization is not using metrics in teams to control and evaluate the process.
Based on these findings, the authors present several recommendations that should be considered to create a culture where teams use metrics to learn more about the process. First, the organization should set guidelines among teams for what should, and is desired to, be evaluated, with focus on the information need. Second, metrics need to be given higher priority through directives from management granting teams the resources to manage metrics, which at the same time signals that the organization believes metrics can improve their work. Third, based on the company context, teams should identify metrics based on an information need derived from their prioritizations, changes, decisions and what is currently left unanswered. Finally, metrics should primarily be used as a basis for discussion and feedback, with focus on trends rather than absolute numbers. / The principal goal of any software development company is to maximize the value created by each individual investment. To strengthen and speed up value creation, Agile software development has grown in popularity as a response to volatile and uncertain markets. In an Agile environment, teams focus on lightweight working methods, continuous deliveries and customer collaboration over the earlier way of working, which consisted of heavy documentation and inflexible processes. At the same time, the Agile way of working has made it difficult for organizations to control and evaluate the process, with the result that organizations assume their Agile process is ideal and well-functioning without evidence to support it. This thesis was conducted as a case study at Sectra ImIT, an Agile company in medical technology focusing on imaging systems and IT solutions.
The company is in an early phase of investigating and introducing metrics in the system development process, and the aim of the thesis was to evaluate and suggest how the organization could use metrics to control and evaluate value creation in the process. The study also aimed to give strategic suggestions for how the company can work in the future with implementing and using metrics. The descriptive and exploratory aim was realized through unstructured and semi-structured interviews, as well as observations, with people working in the process daily. One observation concerned the lack of feedback from defects arising at customer sites back to the system development process. This resulted in a concept, developed and implemented by the authors, aimed at creating feedback and enabling statistical evaluation of the process as a whole. The second observation concerned a number of reasons why the organization does not use metrics in teams to control and evaluate the process. Based on an analysis of these, the authors present several recommendations the company should take into account to create a culture that promotes the use of metrics to build further understanding of the process. The first recommendation is that the organization should jointly discuss guidelines for teams regarding what is considered desirable to evaluate, with focus on information needs. In addition, the organization should encourage teams to allocate more resources to metrics, which at the same time signals that this is something the company believes can help teams improve. Given the company's context, teams should be allowed to identify metrics themselves based on their information needs, arising for example from prioritizations, changes, decisions and what is currently unanswered.
Finally, metrics should mainly be used as a basis for discussion and feedback, with focus on trends rather than on reaching specific targets.
419
Validating the User-Centered Hybrid Assessment Tool (User-CHAT): a comparative usability evaluation
Elgin, Peter D., January 1900
Doctor of Philosophy / Department of Psychology / John J. Uhlarik / Usability practitioners need effective usability assessment techniques in order to facilitate the development of usable consumer products. Many usability evaluation methods have been promoted as the ideal; few, however, fulfill expectations concerning effectiveness. Additionally, a lack of empirical data forces usability practitioners to rely on personal judgments and/or anecdotal statements when deciding which usability method best suits their needs. Therefore, the present study had two principal objectives: (1) to validate a hybrid usability technique that identifies important usability problems and ignores inconsequential ones, and (2) to provide empirical performance data for several usability protocols on a variety of contemporary comparative metrics. The User-Centered Hybrid Assessment Tool (User-CHAT) was developed to maximize efficient diagnosis of usability issues from a behaviorally based perspective while minimizing the time and resource limitations typically associated with usability assessment environments. Several characteristics of user testing, the heuristic evaluation and the cognitive walkthrough were combined to create the User-CHAT. Prior research has demonstrated that the User-CHAT supports an evaluation within 3-4 hours, can be used by individuals with limited human factors or usability background, and requires little training to be used competently, even for complex systems. A state-of-the-art suite of avionics displays and a series of benchmark tasks provided the context in which the User-CHAT's performance was measured relative to its parent usability methods. Two techniques generated comparison lists of usability problems: user-testing data, and various inclusion criteria for usability problems identified by the User-CHAT, heuristic evaluation and cognitive walkthrough.
Overall, the results demonstrated that the User-CHAT attained higher effectiveness scores than the heuristic evaluation and the cognitive walkthrough, suggesting that it helped evaluators identify many usability problems that actually impact users (higher thoroughness) while attenuating the time and effort spent on unimportant issues (higher validity). Furthermore, the User-CHAT yielded the greatest proportion of usability problems rated as serious, i.e., usability issues that hinder performance and compromise safety. The User-CHAT's performance suggests that it is an appropriate usability technique to implement in the product development lifecycle. Limitations and future research directions are discussed.
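The thoroughness and validity notions invoked above have standard definitions in the usability-evaluation literature (the dissertation's exact scoring may differ): thoroughness is the share of real problems a method finds, validity is the share of its reported problems that are real, and effectiveness is their product. A minimal sketch:

```python
def effectiveness(found, real):
    """Thoroughness (share of real problems found), validity (share of
    reported problems that are real) and their product, the effectiveness."""
    found, real = set(found), set(real)
    hits = found & real
    thoroughness = len(hits) / len(real)
    validity = len(hits) / len(found)
    return thoroughness, validity, thoroughness * validity

# A method reports problems {1, 2, 3, 5}; user testing confirms {1, 2, 3, 4}.
print(effectiveness({1, 2, 3, 5}, {1, 2, 3, 4}))  # (0.75, 0.75, 0.5625)
```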
420
Optimal topology design for virtual networks
Youssef, Mina Nabil, January 1900
Master of Science / Department of Electrical and Computer Engineering / Caterina M. Scoglio / Recently, virtualization has been proposed in many scientific fields. Virtualization is widely applied in telecommunications, where networks are required to be extremely flexible to meet current and unpredictable future requirements. The creation of a virtual network over the physical network allows application developers to design new services for users without modifying the underlay resources. The creation of a virtual network of light paths and light trees over the optical network allows resource managers to utilize the huge optical capacity more efficiently.
In this thesis, we consider the optimal topology design for the virtual networks taking into consideration traffic demands and quality of service constraints of the applications. Considered examples of virtual networks are the overlay networks at the application layer and the virtual light path and light tree networks at the optical layer.
In the design of overlay topologies, the performance of the virtual networks is affected by traffic characteristics and by the behavior of nodes, which can be selfish or cooperative. Both static and dynamic traffic demand scenarios are considered. The static demand scenario follows well-known probability distributions, while in the dynamic traffic scenario the traffic matrix is predicted through measurements over each link in the network. We study the problem of finding the overlay topology that minimizes a cost function which takes into account the overlay link creation cost and the routing cost. We formulate the problem as an Integer Linear Program and propose heuristics to find near-optimal overlay topologies with reduced complexity.
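The cost model (link creation cost plus routing cost) can be illustrated with a tiny exhaustive search. This is a stand-in for the thesis's ILP formulation and heuristics, tractable only for toy instances; all node names, costs and demands below are hypothetical:

```python
from itertools import combinations

def shortest_hops(nodes, links, s, t):
    """BFS hop count over an undirected overlay; None if s and t are disconnected."""
    adj = {n: set() for n in nodes}
    for a, b in links:
        adj[a].add(b)
        adj[b].add(a)
    frontier, dist = [s], {s: 0}
    while frontier:
        nxt = []
        for u in frontier:
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    nxt.append(v)
        frontier = nxt
    return dist.get(t)

def best_overlay(nodes, candidates, link_cost, demands):
    """Exhaustive search for the overlay minimizing creation cost plus
    routing cost (demand times hop count), over all candidate link subsets."""
    best = (float("inf"), None)
    for k in range(len(candidates) + 1):
        for subset in combinations(candidates, k):
            cost = link_cost * len(subset)
            for (s, t), demand in demands.items():
                hops = shortest_hops(nodes, subset, s, t)
                if hops is None:      # infeasible: some demand unroutable
                    break
                cost += demand * hops
            else:
                best = min(best, (cost, subset))
    return best

nodes = ["a", "b", "c"]
candidates = [("a", "b"), ("b", "c"), ("a", "c")]
# Direct link a-c (cost 5 + 3*1 = 8) beats routing via b (cost 10 + 3*2 = 16).
print(best_overlay(nodes, candidates, link_cost=5, demands={("a", "c"): 3}))
```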
Virtual optical networks are designed to support many applications; multicast sessions are one example of applications running over the optical network. The main objective in creating the hybrid topology, composed of light paths and light trees, is to increase the number of supported multicast sessions by sharing the network resources. The problem of establishing the hybrid topology is formulated as an Integer Linear Program. Extensive results and analysis of the generated hybrid topologies are presented for evaluation.