31

Mobile application rating based on AHP and FCEM : Using AHP and FCEM in mobile application features rating

FU, YU January 2017 (has links)
Context. Software evaluation is a research hotspot in both academia and industry. As the ultimate beneficiaries of software products, users play an increasingly important role in evaluation. In practice, users' evaluation outcomes serve as a reference for end-users selecting products and for project managers comparing their product with competing products. A mobile application is a special kind of software facing the same situation, so it is necessary to find and test an evaluation method for mobile applications that is based on users' feedback and gives more guidance to different stakeholders. Objectives. The aim of this thesis is to apply and evaluate AF (the combined AHP and FCEM method) in rating mobile application features. Three groups of people and three processes are involved when a rating method is applied: rating designers in the rating design process, rating providers in the rating process, and end-users in the selection process. Each process has corresponding research objectives and research questions to test the applicability of the AF method and the satisfaction of those using AF and its rating outcomes. Methods. The thesis uses a mixed-method approach, combining an experiment, a questionnaire, and interviews. The experiment constructs a rating environment that simulates mobile application evaluation in the real world and tests the applicability of the AF method. The questionnaire is a supporting method for collecting ratings from rating providers, and interviews are used to obtain satisfaction feedback from rating providers and end-users. Results. All AF use conditions are met, and an AF evaluation system can be built for rating mobile application features. Compared with the rating outcomes of an existing method, the AF rating outcomes are correct and complete. Although end-users respond positively to using AF rating outcomes when selecting a product, the satisfaction of rating providers is negative because of the complex rating process and its heavy time cost. Conclusions. AF can be used for rating mobile application features. It has clear advantages, such as more scientifically derived feature weights and richer rating outcomes for different stakeholders, but it also has shortcomings to address, such as a complex rating process, heavy time cost, and poor information presentation. There is no evidence that AF can replace the existing rating methods in app stores; however, AF remains worth investigating in future work.
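
As background to the AF approach named above: AHP typically derives feature weights from a pairwise-comparison matrix via its principal eigenvector, and FCEM aggregates rater judgements through a fuzzy membership matrix. The sketch below illustrates one possible such computation; the comparison matrix, membership values and grade scale are hypothetical and are not taken from the thesis.

```python
import numpy as np

def ahp_weights(pairwise):
    """Derive criteria weights from an AHP pairwise-comparison matrix
    via its principal eigenvector, and report a consistency ratio."""
    A = np.asarray(pairwise, dtype=float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()
    # Saaty's random consistency index (values for small n)
    RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}.get(n, 1.12)
    CI = (eigvals[k].real - n) / (n - 1)
    CR = CI / RI if RI else 0.0
    return w, CR

def fcem_score(weights, membership, grades):
    """Fuzzy comprehensive evaluation: weighted membership degrees
    collapsed to a single score on the chosen grade scale."""
    B = weights @ membership          # fuzzy evaluation vector
    B /= B.sum()                      # normalise
    return float(B @ grades)

# Hypothetical example: three app features compared pairwise by a rating designer
A = [[1, 3, 5],
     [1/3, 1, 2],
     [1/5, 1/2, 1]]
w, cr = ahp_weights(A)

# Hypothetical membership matrix: share of raters placing each feature
# in the grades (excellent, good, fair, poor)
R = np.array([[0.5, 0.3, 0.2, 0.0],
              [0.2, 0.5, 0.2, 0.1],
              [0.1, 0.3, 0.4, 0.2]])
score = fcem_score(w, R, np.array([5, 4, 3, 2]))
print(f"weights={w.round(3)}, CR={cr:.3f}, overall score={score:.2f}")
```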
32

Essays on quality evaluation and bidding behavior in public procurement auctions

Stake, Johan Y. January 2015 (has links)
In this dissertation, I investigate how different aspects of the procurement process and evaluation affect bidding behavior. In essay 1, we attempt to map public procurements in Sweden by gathering a representative sample of procurements. We find that framework agreements and multiple-contract procurements represent a very large share of total government spending. The total value procured yearly by government authorities, municipalities and counties amounts to 215 BSEK, which we believe is an underestimate due to data issues. Essay 2 suggests a simple method for estimating bidding costs in public procurement; these costs are empirically estimated to be approximately 2 percent of the procurement value for a comprehensive dataset and approximately 0.5 percent for a more homogeneous road re-pavement dataset. Our method provides reasonable estimates with relatively low data requirements compared to other methods. Essay 3 investigates the effect of quality evaluation on small and medium-sized enterprises (SMEs). Contrary to common belief, SMEs' participation does not increase when quality is evaluated, and their probability of winning procurements decreases compared with that of large firms. In essay 4, bidders' decisions to apply for a procurement review ("appeal") are investigated. Contrary to procurers' beliefs, evaluating quality is found not to have any statistically significant effect on the probability of appeals. Instead, I empirically confirm the theoretical prediction concerning the first runner-up's decision to demand that the evaluation be redone, as well as free-riding in appealing. In essay 5, we test whether spatial econometrics can be used to detect collusion in procurement data. We apply this method to a known cartel, testing both during and after the period in which the cartel was active. Our estimates support the proposition that spatial econometrics can be used to test for collusive behavior.
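
One common spatial-econometric screen for clustering in bids is Moran's I computed on normalized bids with a spatial weights matrix. The sketch below illustrates the idea with hypothetical data; it is not the specification used in essay 5.

```python
import numpy as np

def morans_i(values, W):
    """Moran's I spatial autocorrelation statistic.

    values : 1-D array, e.g. bid divided by the engineer's estimate per contract
    W      : spatial weights matrix (here, row-standardized firm proximity)
    """
    x = np.asarray(values, dtype=float)
    W = np.asarray(W, dtype=float)
    n = len(x)
    z = x - x.mean()
    return n * (z @ W @ z) / (W.sum() * (z @ z))

# Hypothetical data: normalized bids of five firms and a contiguity-style
# weights matrix marking which firms operate in neighbouring regions.
bids = [0.98, 1.02, 1.01, 0.85, 0.87]
W = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 0, 0],
              [0, 0, 0, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
W = W / W.sum(axis=1, keepdims=True)   # row-standardize
print(f"Moran's I = {morans_i(bids, W):.3f}")  # positive values suggest spatial clustering of bids
```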
33

[en] TRANSLATION MEMORY: AID OR HANDICAP? / [pt] MEMÓRIA DE TRADUÇÃO: AUXÍLIO OU EMPECILHO?

ADRIANA CESCHIN RIECHE 04 June 2004 (has links)
[pt] Diante do papel cada vez mais importante desempenhado pelas ferramentas de auxílio à tradução no trabalho de tradutores profissionais, a discussão das conseqüências de sua utilização assume especial interesse. O presente estudo concentra-se em apenas uma dessas ferramentas: os sistemas de memória de tradução, que surgiram prometendo ganhos de produtividade, maior consistência e economia. O objetivo é analisar os principais fatores que levam a problemas de qualidade nesses sistemas e apresentar sugestões para melhorar o controle da qualidade realizado, ressaltando a necessidade de manutenção e revisão das memórias para que realmente sirvam ao propósito de serem ferramentas e não empecilhos para o tradutor. Essas questões serão analisadas no contexto do mercado de localização de software, segmento em que as memórias de tradução são amplamente utilizadas, à luz das abordagens contemporâneas sobre qualidade da tradução. / [en] Considering the increasingly important role played by computer-aided translation tools in the work of professional translators, the discussion about their use gains special interest. This study focuses on only one of these tools: translation memory systems, which were developed to ensure productivity gains, more consistency and cost savings. The objective is to analyze the major factors leading to quality problems in such systems and to suggest ways to enhance quality control, emphasizing the need for updating and reviewing the translation memories so that they can actually serve as translation aids rather than handicaps. These issues will be analyzed in the context of the software localization market, a segment in which translation memories are widely used, in the light of contemporary approaches to translation quality assessment.
34

Synthesis, Coding, and Evaluation of 3D Images Based on Integral Imaging

Olsson, Roger January 2008 (has links)
In recent years camera prototypes based on Integral Imaging (II) have emerged that are capable of capturing three-dimensional (3D) images. When being viewed on a 3D display, these II-pictures convey depth and content that realistically change perspective as the viewer changes the viewing position. The dissertation concentrates on three restraining factors concerning II-picture progress. Firstly, there is a lack of digital II-pictures available for inter alia comparative research and coding scheme development. Secondly, there is an absence of objective quality metrics that explicitly measure distortion with respect to the II-picture properties: depth and view-angle dependency. Thirdly, low coding efficiencies are achieved when present image coding standards are applied to II-pictures. A computer synthesis method has been developed, which enables the production of different II-picture types. An II-camera model forms a basis and is combined with a scene description language that allows for the describing of arbitrary complex virtual scenes. The light transport within the scene and into the II-camera is simulated using ray-tracing and geometrical optics. A number of II-camera models, scene descriptions, and II-pictures are produced using the presented method. Two quality evaluation metrics have been constructed to objectively quantify the distortion contained in an II-picture with respect to its specific properties. The first metric models how the distortion is perceived by a viewer watching an II-display from different viewing-angles. The second metric estimates the depth-distribution of the distortion. New aspects of coding-induced artifacts within the II-picture are revealed using the proposed metrics. Finally, a coding scheme for II-pictures has been developed that inter alia utilizes the video coding standard H.264/AVC by firstly transforming the II-picture into a pseudo video sequence. The properties of the coding scheme have been studied in detail and compared with other coding schemes using the proposed evaluation metrics. The proposed coding scheme achieves the same quality as JPEG2000 at approximately 1/60th of the storage- or distribution requirements. / De senaste åren har kameraprototyper som kan fånga tredimensionella (3D) bilder presenterats, baserade på 3D-tekniken Integral Imaging (II). När dessa II-bilder betraktas på en 3D-skärm, delger de både ett djup och ett innehåll som på ett realistiskt sätt ändrar perspektiv när tittaren ändrar sin betraktningsposition. Avhandlingen koncentrerar sig på tre återhållande faktorer gällande II-bilder. För det första finns det en mycket begränsad allmän tillgång till II-bilder för jämförande forskning och utveckling av kodningsmetoder. Det finns heller inga objektiva kvalitetsmått som uttryckligen mäter distorsion med avseende på II-bildens egenskaper: djup och betraktningsvinkelberoende. Slutligen uppnår nuvarande standarder för bildkodning låg kodningseffektivitet när de appliceras på II-bilder. En metod baserad på datorrendrering har utvecklats som tillåter produktion av olika typer av II-bilder. En II-kameramodel ingår som bas, kombinerat med ett scenbeskrivningsspråk som möjligör att godtydligt komplexa virtuella scener definieras. Ljustransporten inom scenen och fram till II-kameran simuleras med strålföljning och geometrisk optik. Den presenterade metoden används för att skapa ett antal II-kameramodeller, scendefinitioner och II-bilder. 
Två kvalitetmått har tagits fram för att objektivt kvantifiera distorsion som kan uppträda i en II-bild med avseende på dess specifika egenskaper. Det första måttet modellerar hur distortionen uppfattas av en tittare som betraktar en 3D-skärm ur olika betraktningsvinklar. Det andra måttet beräknar distorsionens djupdistribution inom II-bilden. Nya aspekter av kodningsinducerade artefakter påvisas med de föreslagna kvalitetsmåtten. Slutligen har en kodningsmetod för II-bilder utarbetats som bland annat utnyttjar videokodningsstandarden H.264/AVC genom att först transformera II-bilden till en pseudovideosekvens (PVS). Kodningsmetodens egenskaper har studerats i detalj och jämförts med andra kodningsmetoder, bland annat med hjälp av de föreslagna kvalitetsmåtten. Den föreslagna kodningsmetoden åstadkommer samma kvalitet som JPEG2000 till ungefärligen 1/60-del av kraven på lagring och distribution.
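
The pseudo-video-sequence idea described in the abstract above can be illustrated as follows: the II-picture is cut into its elemental-image tiles and the tiles are stacked as frames, so that a standard video codec such as H.264/AVC can exploit the strong similarity between neighbouring tiles as temporal redundancy. The tile geometry and scan order below are hypothetical, not those used in the dissertation.

```python
import numpy as np

def integral_image_to_pvs(ii_picture, tile_h, tile_w):
    """Rearrange an integral image into a pseudo video sequence (PVS):
    each elemental image (one lenslet's tile) becomes one frame."""
    H, W = ii_picture.shape[:2]
    rows, cols = H // tile_h, W // tile_w
    frames = []
    for r in range(rows):           # simple row-major scan order (illustrative)
        for c in range(cols):
            tile = ii_picture[r*tile_h:(r+1)*tile_h, c*tile_w:(c+1)*tile_w]
            frames.append(tile)
    return np.stack(frames)         # shape: (rows*cols, tile_h, tile_w)

# Hypothetical 512x512 grayscale II-picture with 32x32 elemental images
ii = np.random.randint(0, 256, (512, 512), dtype=np.uint8)
pvs = integral_image_to_pvs(ii, 32, 32)
print(pvs.shape)                    # (256, 32, 32): 256 frames ready for a video encoder
```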
35

EVALUATION OF THE CONTRIBUTION METAGENOMIC SHOTGUN SEQUENCING HAS IN ASSESSING POLLUTION SOURCE AND DEFINING PUBLIC HEALTH AND ENVIRONMENTAL RISKS

Unknown Date (has links)
State-approved membrane filtration (MF) techniques for water quality assessments were contrasted with metagenomic shotgun sequencing (MSS) protocols to evaluate their efficacy in providing precise health-risk indices for surface waters. Using MSS, the relative numerical abundance of pathogenic bacteria, virulence and antibiotic resistance genes revealed the status and potential pollution sources in samples studied. Traditional culture methods (TCM) showed possible fecal contamination, while MSS clearly distinguished between fecal and environmental bacteria contamination sources, and pinpointed actual risks from pathogens. RNA MSS to detect all viable microorganisms and qPCR of fecal biomarkers were used to assess the possible environmental risk between runoff drainage canals and a swamp area with no anthropogenic impact. Results revealed higher levels of pathogenic bacteria, viruses, and virulence and antibiotic resistance genes in the canal samples. The data underscore the potential utility of MSS in precision risk assessment for public and biodiversity health and tracking of environmental microbiome shifts by field managers and policy makers. / Includes bibliography. / Thesis (M.S.)--Florida Atlantic University, 2020. / FAU Electronic Theses and Dissertations Collection
36

Implementace metriky pro hodnocení kvality videosekvencí / Implementation of a metric for video quality evaluation

Kachlík, Miloš January 2012 (has links)
The aim of this work is to create a MATLAB program implementing the CPqD-IES metric for evaluating video quality. This metric is described in Recommendation ITU-R BT.1683, which covers objective video quality measurement techniques for standard-definition digital broadcast television when a full reference is available. Video quality is assessed from objective parameters computed after image segmentation: natural scenes are segmented into edge, plane and texture regions, and objective parameters are assigned to each of these contexts. The relationship between each objective parameter and a subjective impairment level is approximated by a logistic curve, resulting in an estimated impairment level for each parameter.
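
The logistic mapping from an objective parameter to an estimated impairment level can be sketched as below; the four curve constants are hypothetical placeholders, not the fitted values from ITU-R BT.1683.

```python
import numpy as np

def logistic_impairment(p, a, b, c, d):
    """Map an objective distortion parameter p to an estimated impairment
    level using a four-parameter logistic curve (constants are illustrative)."""
    return a + (b - a) / (1.0 + np.exp(-c * (p - d)))

# Hypothetical objective parameters measured on edge / plane / texture regions
params = {"edge": 0.42, "plane": 0.18, "texture": 0.65}
levels = {region: logistic_impairment(p, a=0.0, b=5.0, c=8.0, d=0.5)
          for region, p in params.items()}
print(levels)   # estimated impairment level per context
```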
37

Visualisation and Generalisation of 3D City Models

Mao, Bo January 2010 (has links)
3D city models have been widely used in different applications such as urban planning, traffic control and disaster management. Effective visualisation of 3D city models at various scales is one of the pivotal techniques for implementing these applications. In this thesis, a framework is proposed to visualise 3D city models both online and offline, using City Geography Markup Language (CityGML) and Extensible 3D (X3D) to represent and present the models. Generalisation methods are then studied and tailored to create multi-scale 3D city scenes dynamically. Finally, the quality of the generalised 3D city models is evaluated by measuring their visual similarity to the original models.   In the proposed visualisation framework, 3D city models are stored in the CityGML format, which supports both geometric and semantic information. These CityGML files are parsed to create 3D scenes that are visualised with existing 3D standards. Because the input and output of the framework are both standardised, it is possible to integrate city models from different sources and visualise them through different viewers.   Considering the complexity of city objects, generalisation methods are studied to simplify the city models and increase visualisation efficiency. In this thesis, aggregation and typification methods are improved to simplify 3D city models.   Multiple-representation data structures are required to store the generalisation information for dynamic visualisation. One of these is the CityTree, a novel structure for representing building groups, which is tested for building aggregation. Meanwhile, a Minimum Spanning Tree (MST) is employed to detect linear building group structures in the city models, which are then typified with different strategies. According to the experimental results, using the CityTree reduces the creation time of generalised 3D city models by more than 50%.   Different generalisation strategies lead to different outcomes, so it is important to evaluate the quality of the generalised models. In this thesis a new evaluation method is proposed: visual features of the 3D city models are represented by an Attributed Relation Graph (ARG) and their similarity distances are calculated with the Nested Earth Mover's Distance (NEMD) algorithm. The calculation results and a user survey show that the ARG and NEMD methods can reflect the visual similarity between generalised city models and the original ones. / QC 20100923 / ViSuCity Project
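
A minimal sketch of MST-based building grouping, in the spirit of the detection step described above: build an MST over building centroids and cut edges longer than a threshold, so that the remaining connected components form candidate building groups. The distance threshold and centroids are hypothetical, and the thesis applies its own criteria for identifying linear groups.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components

def building_groups(centroids, max_edge=50.0):
    """Group buildings by computing a Minimum Spanning Tree over their
    centroids and cutting edges longer than max_edge (metres); each
    remaining connected component is one candidate building group."""
    D = squareform(pdist(np.asarray(centroids, dtype=float)))
    mst = minimum_spanning_tree(D).toarray()
    mst[mst > max_edge] = 0.0                       # cut long edges
    n_groups, labels = connected_components(mst, directed=False)
    return n_groups, labels

# Hypothetical building centroids (x, y) in metres
pts = [(0, 0), (10, 2), (22, 1), (200, 5), (212, 8)]
n, labels = building_groups(pts, max_edge=30.0)
print(n, labels)    # e.g. 2 groups: [0 0 0 1 1]
```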
38

Data quality for the decision of the ambient systems / Qualité des données pour la décision des systèmes ambiants

Kara, Madjid 14 March 2018 (has links)
La qualité des données est une condition commune à tous les projets de technologie de l'information, elle est devenue un domaine de recherche complexe avec la multiplicité et l’expansion des différentes sources de données. Des chercheurs se sont penchés sur l’axe de la modélisation et l’évaluation des données, plusieurs approches ont été proposées mais elles étaient limitées à un domaine d’utilisation bien précis et n’offraient pas un profil de qualité nous permettant d’évaluer un modèle de qualité de données global. L’évaluation basée sur les modèles de qualité ISO a fait son apparition, néanmoins ces modèles ne nous guident pas pour leurs utilisation, le fait de devoir les adapter à chaque cas de figure sans avoir de méthodes précises. Notre travail se focalise sur les problèmes de la qualité des données d'un système ambiant où les contraintes de temps pour la prise de décision sont plus importantes par rapport aux applications traditionnelles. L'objectif principal est de fournir au système décisionnel une vision très spécifique de la qualité des données issues des capteurs. Nous identifions les aspects quantifiables des données capteurs pour les relier aux métriques appropriées de notre modèle de qualité de données spécifique. Notre travail présente les contributions suivantes : (i) création d’un modèle de qualité de données générique basé sur plusieurs standards de qualité existants, (ii) formalisation du modèle de qualité sous forme d’une ontologie qui nous permet l’intégration de ces modèles (de i), en spécifiant les différents liens, appelés relations d'équivalence, qui existent entre les critères composant ces modèles, (iii) proposition d’un algorithme d’instanciation pour extraire le modèle de qualité de données spécifique à partir du modèle de qualité de données générique, (iv) proposition d’une approche d’évaluation globale du modèle de qualité de données spécifique en utilisant deux processus, le premier processus consiste à exécuter les métriques reliées aux données capteurs et le deuxième processus récupère le résultat de cette exécution et utilise le principe de la logique floue pour l’évaluation des facteurs de qualité de notre modèle de qualité de données spécifique. Puis, l'expert établie des valeurs représentant le poids de chaque facteur en se basant sur la table d'interdépendance pour prendre en compte l'interaction entre les différents critères de données et on utilisera la procédure d'agrégation pour obtenir un degré de confiance. En ce basant sur ce résultat final, le composant décisionnel fera une analyse puis prendra une décision. / Data quality is a common condition to all information technology projects; it has become a complex research domain with the multiplicity and expansion of different data sources. Researchers have studied the axis of modeling and evaluating data, several approaches have been proposed but they are limited to a specific use field and did not offer a quality profile enabling us to evaluate a global quality model. The evaluation based on ISO quality models has emerged; however, these models do not guide us for their use, having to adapt them to each scenario without precise methods. Our work focuses on the data quality issues of an ambient system where the time constraints for decision-making is greater compared to traditional applications. The main objective is to provide the decision-making system with a very specific view of the sensors data quality. 
We identify the quantifiable aspects of sensor data and link them to the appropriate metrics of our specified data quality model. Our work presents the following contributions: (i) creating a generic data quality model based on several existing data quality standards; (ii) formalizing the data quality models as an ontology, which allows integrating the models from (i) by specifying the links, called equivalence relations, between the criteria composing them; (iii) proposing an instantiation algorithm to extract the specified data quality model from the generic data quality model; (iv) proposing a global evaluation approach for the specified data quality model using two processes: the first executes the metrics on the sensor data, and the second takes the result of the first and uses fuzzy logic to evaluate the quality factors of our specified data quality model. The expert then defines weight values based on the model's interdependence table, to take into account the interaction between criteria, and an aggregation procedure yields a degree of confidence. Based on this final result, the decision component performs its analysis and makes a decision.
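
A minimal sketch of the fuzzy evaluation and weighted aggregation step described above, under assumed metric names, fuzzy sets and expert weights (all hypothetical, not taken from the thesis):

```python
import numpy as np

def triangular_membership(x, a, b, c):
    """Degree to which value x belongs to a triangular fuzzy set (a, b, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def degree_of_confidence(metric_values, weights, fuzzy_sets):
    """Evaluate each sensor-data metric against a fuzzy quality level, then
    aggregate with expert weights into a single degree of confidence."""
    scores = []
    for name, x in metric_values.items():
        a, b, c = fuzzy_sets[name]                  # 'good quality' fuzzy set
        scores.append(triangular_membership(x, a, b, c))
    w = np.asarray([weights[name] for name in metric_values])
    w = w / w.sum()                                 # normalise expert weights
    return float(w @ np.asarray(scores))

metrics = {"completeness": 0.92, "timeliness": 0.70, "accuracy": 0.85}
sets    = {"completeness": (0.5, 1.0, 1.5),
           "timeliness":   (0.4, 0.9, 1.4),
           "accuracy":     (0.5, 1.0, 1.5)}
weights = {"completeness": 0.3, "timeliness": 0.4, "accuracy": 0.3}
print(f"degree of confidence = {degree_of_confidence(metrics, weights, sets):.2f}")
```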
39

Identification of Phytochemical Markers for Quality Evaluation of Tree Peony Stamen Using Comprehensive HPLC-Based Analysis

Xie, Lihang, Yan, Zhenguo, Li, Mengchen, Tian, Yao, Kilaru, Aruna, Niu, Lixin, Zhang, Yanlong 15 October 2020 (has links)
Stamen from Paeonia ostii 'Fengdan Bai' and Paeonia rockii is rich in phenolic compounds and is popularly used as a tea material with various pharmaceutical functions. In order to investigate whether stamen from other tree peony cultivars could be used as a natural antioxidant, the quality of stamen from thirty-five cultivars collected from the same garden was evaluated based on phenolic composition and content, determined by high-performance liquid chromatography, and on in vitro antioxidant properties, coupled with comprehensive chemometric analysis. The results revealed that the phenolic contents and antioxidant capacities of tree peony stamen were unique and cultivar dependent. Stamen from 'Zi Erqiao' exhibited the highest total phenolic and flavonol content and the strongest antioxidant activities, while those of 'Fengdan Bai' and P. rockii were at below-average levels among the test samples. Further, thirty-seven tree peony cultivars were divided into three major groups with significant differences in total metabolite content and antioxidant properties, mainly attributable to six phytochemical compounds. Among these, naringin and benzoylpaeoniflorin were found by chemometric analysis to be critical chemical markers for identifying high-quality tree peony stamen. Moreover, correlation analysis suggested that stamen from earlier-flowering cultivars with a hidden pistil, double petals, shorter thrum, more carpels and greater volume were possibly of higher quality. Together, cultivars with stamen rich in phenolics and antioxidant properties, along with their relevant critical phenotypic and phytochemical traits, were screened out. This study should benefit the rapid identification of high-quality tree peony stamen and provide a valuable reference for its development and utilization in functional foods and pharmaceutical resources.
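
A chemometric grouping of cultivars by phenolic profile could, for example, be obtained with hierarchical clustering; the sketch below uses hypothetical data and is not the analysis pipeline of the study.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical phenolic-content matrix: rows = cultivars, columns = compounds
# (e.g. naringin, benzoylpaeoniflorin, ...), values in mg/g dry weight.
profiles = np.array([[12.1,  3.4, 5.0],
                     [11.8,  3.1, 4.7],
                     [ 4.2,  9.8, 1.2],
                     [ 4.5, 10.1, 1.0],
                     [ 7.9,  6.0, 3.3]])

# Standardize each compound so concentration scale does not dominate
z = (profiles - profiles.mean(axis=0)) / profiles.std(axis=0)

# Ward hierarchical clustering, cut into three groups as in the study
groups = fcluster(linkage(z, method="ward"), t=3, criterion="maxclust")
print(groups)   # cluster label per cultivar, e.g. [1 1 2 2 3]
```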
40

Quality evaluation of PVD coatings on cutting tools by micro-blasting

Berling, Victor January 2016 (has links)
Sandvik Coromant in Gimo needs a good method for evaluating the adhesion of PVD coatings, since the existing testing method is only available at Sandvik Coromant's site in Västberga. Different micro-blasting methods were investigated in this thesis, and the results show that some of them, specifically the wet blasting methods M1 and M2, could potentially be used for adhesion evaluation. The results also show that the investigated geometry received varying adhesion quality when a lower etching bias was used in the PVD process and when different sides were pointing upwards in the PVD furnace. Further investigation will be needed in order to fully implement micro-blasting as a testing method in production.
