About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
81

Texture Structure Analysis

January 2014 (has links)
abstract: Texture analysis plays an important role in applications such as automated pattern inspection, image and video compression, content-based image retrieval, remote sensing, medical imaging and document processing, to name a few. Texture structure analysis is the process of studying the structure present in textures, a structure that can be expressed in terms of perceived regularity. The human visual system (HVS) uses perceived regularity as one of the important pre-attentive cues in low-level image understanding. Like the HVS, image processing and computer vision systems can make fast and efficient decisions if they can quantify this regularity automatically. In this work, the problem of quantifying the degree of perceived regularity when looking at an arbitrary texture is introduced and addressed. One key contribution of this work is an objective no-reference perceptual texture regularity metric based on visual saliency. Other key contributions include an adaptive texture synthesis method based on texture regularity, and a low-complexity reduced-reference visual quality metric for assessing the quality of synthesized textures.

In order to use the best-performing visual attention model on textures, the ability of the most popular visual attention models to predict visual saliency on textures is evaluated. Since no publicly available database provides ground-truth saliency maps for images with exclusively texture content, a new eye-tracking database is systematically built. Using the Visual Saliency Map (VSM) generated by the best visual attention model, the proposed texture regularity metric is computed. The metric is based on the observation that VSM characteristics differ between textures of differing regularity, and it combines two texture regularity scores: a textural similarity score and a spatial distribution score. To evaluate the performance of the proposed regularity metric, a texture regularity database called RegTEX is built as part of this work. Subjective testing shows that the proposed metric has a strong correlation with the Mean Opinion Score (MOS) for the perceived regularity of textures. The proposed method is also shown to be robust to geometric and photometric transformations, and to outperform some of the popular texture regularity metrics in predicting perceived regularity. The impact of the proposed metric on improving the performance of several image-processing applications is also presented.

The influence of perceived texture regularity on the perceptual quality of synthesized textures is demonstrated by building a database of synthesized textures named SynTEX. Subjective testing shows that textures with different degrees of perceived regularity exhibit different degrees of vulnerability to artifacts resulting from different texture synthesis approaches. This work also proposes an algorithm for adaptively selecting the appropriate texture synthesis method based on the perceived regularity of the original texture. A reduced-reference texture quality metric for texture synthesis is proposed as well. It is based on the change in perceived regularity and the change in perceived granularity between the original and the synthesized textures, where perceived granularity is quantified through a new granularity metric proposed in this work. Subjective testing shows that the proposed quality metric, using just two parameters, has a strong correlation with the MOS for the fidelity of synthesized textures and outperforms state-of-the-art full-reference quality metrics on three different texture databases. Finally, the ability of the proposed regularity metric to predict the perceived degradation of textures due to compression and blur artifacts is also established. / Dissertation/Thesis / Ph.D. Electrical Engineering 2014
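As an illustration of the kind of computation involved, here is a minimal Python sketch of a saliency-based regularity score. The two proxies used below (similarity of per-cell saliency histograms, and evenness of the saliency mass over the image) and their equal-weight combination are assumptions made for illustration; the thesis defines its own textural similarity and spatial distribution scores.

```python
import numpy as np

def regularity_from_saliency(vsm: np.ndarray, grid: int = 8) -> float:
    """Toy regularity score from a visual saliency map (VSM) in [0, 1]."""
    h, w = vsm.shape
    cells = [vsm[i * h // grid:(i + 1) * h // grid,
                 j * w // grid:(j + 1) * w // grid]
             for i in range(grid) for j in range(grid)]
    # "Textural similarity" proxy: regular textures yield similar
    # saliency histograms across cells.
    hists = []
    for c in cells:
        hh, _ = np.histogram(c, bins=16, range=(0.0, 1.0))
        hists.append(hh / (hh.sum() + 1e-12))
    dists = [np.abs(a - b).sum() / 2.0
             for k, a in enumerate(hists) for b in hists[k + 1:]]
    similarity = 1.0 - float(np.mean(dists))
    # "Spatial distribution" proxy: normalized entropy of how saliency
    # mass spreads over the cells (even spread suggests regularity).
    mass = np.array([c.sum() for c in cells], dtype=float)
    p = mass / (mass.sum() + 1e-12)
    entropy = float(-(p * np.log(p + 1e-12)).sum() / np.log(len(cells)))
    return 0.5 * similarity + 0.5 * entropy  # equal weights: an assumption
```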
82

När går bussen? : En studie kring metoder för kvalitetsbedömning av SL:s bussavgångsprognoser / When does the bus leave? : A study of methods for quality assessment of SL's bus departure predictions

Karlsson, Gustav, Lillo, Gustav January 2017 (has links)
As a result of a growing population, the city of Stockholm is facing many challenges. Getting more people to travel by public transport is a key factor in coping with this increased urbanization. In the drive for increased ridership, it is the Stockholm Public Transport Administration's (SL) job to make sure that the services provided are of high quality. One of these services is the real-time bus departure predictions provided to travellers through digital signs and through web and mobile applications. Due to a lack of proper tools, SL has unfortunately not yet been able to establish a systematic assessment of the quality of these bus predictions. The goal of this study was to help SL find such tools and solutions for assessing the quality of bus predictions. More specifically, the purpose of the study was to investigate the concept of prediction quality and identify suitable statistical tools for measuring quality. To this end, a comprehensive literature study was conducted. The findings of the literature study were then tested in practice in order to answer how such quality measurements should be made in the context of SL's IT infrastructure. This was done by carrying out a pilot study in which the prediction quality was assessed on one week of data for a specific bus line. From the initial literature study, it was concluded that there are many dimensions that potentially affect the traveller's perception of bus prediction quality. However, it was also concluded that a quality assessment should plausibly start with an evaluation of precision. In order to assess precision, several types of descriptive measures and analytical perspectives were proposed. As for how these measurements should be carried out within SL's IT systems, a method for creating observations from the available prediction data was presented. It was also concluded that, in order to mirror the travellers' experience, the prediction data should be collected "late" in the process of bus prediction generation.
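To make "precision" concrete, here is a minimal sketch of the kind of descriptive measures discussed, computed from matched pairs of the last prediction shown for a departure and the observed departure time (as UNIX timestamps). The particular statistics and the 60-second tolerance are illustrative assumptions, not SL's or the study's definitions.

```python
import numpy as np

def precision_summary(predicted, actual):
    """Descriptive precision measures for bus departure predictions."""
    e = np.asarray(predicted, dtype=float) - np.asarray(actual, dtype=float)
    return {
        "mae_s": float(np.mean(np.abs(e))),            # mean absolute error
        "rmse_s": float(np.sqrt(np.mean(e ** 2))),     # penalizes outliers
        "bias_s": float(np.mean(e)),                   # systematic early/late
        "p90_abs_s": float(np.percentile(np.abs(e), 90)),
        "share_within_60s": float(np.mean(np.abs(e) <= 60)),
    }
```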
83

Testes de toxicidade como instrumento na avaliação dos sedimentos de água doce do Estado de São Paulo / Toxicity tests as a tool to the assessment of São Paulo State freshwater sediments

Rosalina Pereira de Almeida Araujo 18 October 2005 (has links)
A necessidade de se considerar o sedimento na análise da qualidade de corpos de água motivou a realização desse estudo, visando contribuir para o estabelecimento de protocolos de testes com o anfípoda Hyalella, critérios para a avaliação da toxicidade de sedimentos de água doce e um quadro da situação atual das principais bacias do Estado de São Paulo, em termos ecotoxicológicos. Desta forma, inicialmente, foi comparada a sensibilidade de duas espécies de Hyalella, ou seja H. azteca e Hyalella sp., adotando diferentes substâncias e sedimentos. Também comparou-se a taxa de fecundidade e sobrevivência destas duas espécies em determinadas condições de cultivo. Após a escolha da espécie teste mais adequada, Hyalella azteca, foram avaliadas diferentes condições de cultivo (tipo e quantidade de alimento) e de ensaio (sistema estático e semi-estático, razão de sedimento e água 1:4 e 1:2, os critérios de avaliação sobrevivência e crescimento) com amostras de sedimento, apresentando diferentes graus de contaminação. Esse estudo permitiu estabelecer uma condição de cultivo (100 organismos em recipientes com 2,5L de água natural ou reconstituída, a planta aquática Elódea como substrato e, como alimento, ração de coelho granulada mais uma solução de ração de peixe digerida, levedura e óleo de prímula). Esta condição permitiu obter um número médio de jovens/fêmea/semana de H. azteca de 9,2 com um desvio padrão de 2,7. Para avaliar a condição de ensaio que melhor representaria as do ambiente, os resultados dos testes de toxicidade com Hyalella azteca foram comparados com dados químicos e da comunidade bentônica, de amostras coletadas no mesmo local e data. Dessa forma verificou-se que a melhor condição de teste de toxicidade, com a duração de 10 dias, com H. azteca foi em sistema semi-estático com trocas de água a cada dois dias, adotando a razão de sedimento e água de 1:2 e avaliando a mortalidade e o crescimento. A partir desses dados, foram elaborados critérios que expressam classes de qualidade de sedimentos, ou seja: bom, quando o sedimento não apresentou toxicidade; regular, efeito sub-letal (redução do crescimento); ruim, mortalidade <50% e péssimo, mortalidade >=50%. Esse critério foi aplicado nos estudos realizados para avaliação da qualidade dos sedimentos em 12 das 22 das Unidades de Gerenciamento de Recursos Hídricos, para os quais foram compilados e selecionados dados ecotoxicológicos, além de químicos e da comunidade bentônica, quando disponíveis. A análise integrada desses resultados, utilizando classes de qualidade para as variáveis químicas, ecotoxicológicas e índices para a comunidade bentônica, permitiu estabelecer uma melhor avaliação da qualidade dos sedimentos. Além disso, verificou-se a importância de se integrar outros dados, como deformidade em Chironomus e teste de mutagenicidade, para se confirmar ou não a presença e estabelecer possíveis grupos de compostos, que poderiam estar causando impactos na comunidade de organismos que vivem no sedimento. A integração dessas diferentes linhas de evidências é que permitiu o estabelecimento do diagnóstico ou das análises a serem realizadas para se determinar o tipo de agentes estressores que possam estar presentes em um dado local em estudo. Portanto, testes de toxicidade se mostraram úteis e necessários na caracterização e em estudos para avaliar e identificar a qualidade de sedimentos, e devem ser adotados no monitoramento, junto com outras variáveis.
/ The need to include sediment evaluation in the quality assessment of surface waters motivated this study, which aims to contribute to the establishment of testing protocols with the amphipod Hyalella and of criteria for sediment toxicity evaluation. It was also motivated by the lack of a survey, in terms of toxicity, of the sediment quality of São Paulo State freshwater watersheds. Initially, the sensitivity of two Hyalella species, H. azteca and Hyalella sp. (previously named H. meinerti), was compared using different substances and sediment samples. The fecundity and survival rates of these two species were compared under standardized culture conditions. After the selection of the most suitable species, Hyalella azteca, different culturing conditions (food type and quantity) and assay designs (semi-static and static systems, water/sediment ratios of 1:2 and 1:4, survival and growth as evaluation criteria) were studied using sediments with different contamination levels. The best culturing conditions were: 100 organisms per 2.5 liters of natural or reconstituted water, the aquatic plant Elodea as substrate, and granulated rabbit food plus a mixture of digested fish food, yeast and primula oil. Adopting these culturing conditions, it was possible to obtain 9.2 juvenile Hyalella azteca per female per week, with a standard deviation of 2.7, for around three months. In order to determine the best test conditions, the toxicity test results were compared with chemical analyses and benthic community data obtained from samples collected at the same sites and at the same time. The analysis of the results showed that the best condition for a 10-day exposure was a semi-static system with water exchange every two days, a 1:2 sediment/water ratio, and mortality and growth as endpoints. Based on these results, toxicity criteria expressing sediment quality classes were elaborated. The classes established were: good, when the sediment was non-toxic; regular, when sublethal effects (growth reduction) were observed; bad, when mortality was less than 50%; and extremely bad, when mortality was equal to or greater than 50%. These criteria were applied to toxicity data compiled and selected from different sediment quality studies performed in 12 of the 22 Freshwater Watershed Management Units of São Paulo State, using ecotoxicological, chemical and benthic community data when available. Only by using quality classes for these three variables was it possible to establish the sediment quality from the survey data. Other variables (benthic deformities and mutagenicity) were considered important to confirm, or not, the presence of the possible chemical groups that could be causing effects on benthic organisms, and to establish which they are. Only by integrating these different lines of evidence was it possible to define the sediment quality, or which analyses should be done in order to point out the types of stressor that could be present at a studied site. The conclusion was that toxicity tests with aquatic organisms are reliable and necessary for evaluating sediment quality and identifying sediment toxicity, and should be used in monitoring studies together with other tools.
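The quality classes described above amount to a simple decision rule; a sketch follows (how `significant_mortality` and `growth_reduced` are established statistically is left to the test protocol and is assumed here):

```python
def sediment_class(significant_mortality: bool, mortality_pct: float,
                   growth_reduced: bool) -> str:
    """Quality class from the 10-day Hyalella azteca sediment test,
    following the classes stated in the abstract."""
    if significant_mortality and mortality_pct >= 50.0:
        return "extremely bad"  # 'pessimo': mortality >= 50%
    if significant_mortality:
        return "bad"            # 'ruim': significant mortality below 50%
    if growth_reduced:
        return "regular"        # sublethal effect only (growth reduction)
    return "good"               # no toxicity observed
```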
84

Proposta de metodologia para avaliação de redes de voz sobre IP / Proposal of Methodology for Evaluation of Voice over IP Networks

Silva, Vandersilvio da January 2006 (has links)
A redução de custo com telefonia através do uso de voz sobre IP tem disparado a busca de soluções que transformem redes IP originalmente dedicadas a transporte de dados em redes para transporte de voz. Esta dissertação tem por objetivo apresentar uma metodologia para sistematizar a avaliação de redes para o tráfego de voz sobre IP de acordo com as possibilidades disponíveis no cenário a ser avaliado. Inicialmente é dada uma visão geral de voz sobre IP, apresentando os protocolos utilizados, os fatores que influenciam na qualidade da voz e os métodos de avaliação de qualidade da voz. Na seqüência são apresentados trabalhos correlatos à avaliação de qualidade de aplicações de voz sobre IP. E por fim descreve-se a proposta de uma metodologia para sistematizar a avaliação de redes com VoIP. / Voice over IP telephony began with solutions that adapt existing data networks to carry voice streams. Monitoring techniques, QoS mechanisms and signaling protocols can be combined in such a design. Our goal is to present a methodology for evaluating networks and for choosing the probing points and the voice quality evaluation techniques to be used in a network redesign. An overview of VoIP protocols and of the parameters that affect voice quality is presented, as well as some related work on evaluating voice quality based on network parameters. The proposed methodology is presented with a case study showing how to choose the right combination of probing points and voice quality measurement techniques.
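One widely used technique for estimating voice quality from network parameters, of the kind the dissertation surveys, is the ITU-T E-model. A minimal sketch using a commonly cited simplification (G.711 with random loss) follows; the coefficients are illustrative assumptions, not the dissertation's method.

```python
import math

def r_factor(one_way_delay_ms: float, loss_rate: float) -> float:
    """Simplified E-model R-factor from one-way delay and packet loss."""
    d = one_way_delay_ms
    delay_imp = 0.024 * d + (0.11 * (d - 177.3) if d > 177.3 else 0.0)
    loss_imp = 30.0 * math.log(1.0 + 15.0 * loss_rate)  # codec/loss term
    return 93.2 - delay_imp - loss_imp

def mos_from_r(r: float) -> float:
    """Standard mapping from R-factor to an estimated MOS (1.0-4.5)."""
    if r <= 0:
        return 1.0
    if r >= 100:
        return 4.5
    return 1.0 + 0.035 * r + 7e-6 * r * (r - 60.0) * (100.0 - r)

# e.g. 80 ms one-way delay and 1% loss:
print(mos_from_r(r_factor(80.0, 0.01)))  # roughly 4.3
```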
85

Avaliação e recomendação de colaborações em redes sociais acadêmicas / Evaluation and recommendation of collaborations on academic social networks

Lopes, Giseli Rabello January 2012 (has links)
No contexto acadêmico o trabalho de pesquisa científica, nas áreas tecnológicas, é efetuado através de colaborações e cooperações entre diferentes pesquisadores e grupos de pesquisa. Existem pesquisadores atuando nos mais variados assuntos e nas mais diversas subáreas de pesquisa. Para analisar e expandir tais colaborações, muitas vezes, é necessário avaliar o nível de cooperação dos atuais parceiros, bem como identificar novos parceiros para conduzir trabalhos conjuntos. Tal avaliação e identificação não são tarefas triviais. Dessa forma, abordagens para avaliação e recomendação de colaborações são de grande valia para o aperfeiçoamento da cooperação e consequente melhoria da qualidade da pesquisa. Em relação à análise de colaborações, a demanda por critérios de avaliação de qualidade e por métodos de avaliação associados está aumentando e tem sido foco de muitos estudos na última década. Esse crescimento surge devido à busca por excelência acadêmica e para o apoio à tomada de decisões por parte de agências de financiamento para a alocação de recursos. Nesse contexto, há uma tendência a empregar técnicas bibliométricas, especialmente métodos estatísticos aplicados a citações. Com tanto material sendo pesquisado e publicado, resolveu-se explorar outra faceta para definição de indicadores de qualidade no contexto acadêmico visando a obtenção de resultados complementares e que garantam, através de sua validação experimental, uma melhor geração de indicadores. Desse modo, nesta tese, utiliza-se a tendência atual de estudos em análises de redes sociais, definindo métricas sociais específicas para definição de tais indicadores. Neste trabalho, é apresentada uma função para avaliação de qualidade de grupos de pesquisa com base nas colaborações internas entre seus pesquisadores membros. Estas colaborações são avaliadas através de análises em redes sociais bibliográficas acadêmicas baseadas em métricas de interação social. Com relação à identificação ou recomendação de colaborações, esta tese apresenta uma abordagem que considera tanto a parte de conteúdo quanto a de estrutura de uma rede. Especificamente, o conteúdo envolve a correlação entre os pesquisadores por áreas de pesquisa, enquanto a estrutura inclui a análise da existência de relacionamentos prévios entre os pesquisadores. Grande parte das abordagens que efetuam a recomendação de colaborações foca em recomendar especialistas em uma determinada área ou informação. Essas não consideram a área de atuação do usuário alvo da recomendação, como no caso da abordagem apresentada nesta tese. Além disso, neste trabalho, a obtenção de informações sobre os relacionamentos entre usuários, para construção de uma rede social acadêmica, é feita de forma implícita, em dados sobre publicações obtidos de bibliotecas digitais. Utilizando tais dados, também é possível explorar aspectos temporais para ponderação desses relacionamentos, utilizando-os para fins de recomendação de colaborações. Não foram encontrados trabalhos prévios nesse sentido. A presente abordagem inclui a recomendação não só de novas colaborações, como também, a recomendação de intensificação de colaborações já existentes, o que não é considerado por outros trabalhos relacionados. Dessa forma, pode-se dizer que os objetivos de recomendação da presente abordagem são mais amplos. Após propor novas técnicas para avaliação e identificação de parcerias, esta tese as valida através de uma avaliação experimental. 
Especificamente, experimentos com dados reais sobre as relações de coautoria entre pesquisadores pertencentes a diferentes grupos de pesquisa são apresentados para avaliação e demonstração da validade e da aplicabilidade das diferentes proposições desta tese referentes à avaliação de qualidade e recomendação de colaborações. / In technological fields, scientific research is performed through collaboration and cooperation between different researchers and research groups. In order to analyze and expand such collaborations, it is necessary to evaluate the level of cooperation between current partners as well as to identify new partners. Such analysis and identification are not trivial tasks. Thus, approaches to evaluating and recommending collaborations are valuable for improving cooperation and, hence, research quality. Regarding the evaluation of collaborations, the demand for quality assessment criteria and associated evaluation methods is increasing. Indeed, such evaluations have been the focus of many studies in the last decade. This growth arises from the pursuit of academic excellence and from the decision making of funding agencies. In this context, the trend is to employ bibliometric techniques, especially citation statistics. With so much material being researched and published, another facet for defining quality indicators is explored here. Our goal is to obtain complementary results that ensure, through experimental validation, a better generation of indicators. In this thesis, the current trend of studies in social network analysis is applied to the definition of such indicators. Specifically, we introduce a function for the quality assessment of research groups based on the internal collaborations among their member researchers. These collaborations are evaluated through analysis of bibliographic academic social networks based on metrics of social interaction. Regarding the recommendation of collaborations, this thesis presents an approach that considers both the content and the structure of research networks. The content involves the correlation among researchers by research areas, whereas the structure includes the analysis of existing relationships among researchers. Most approaches to collaboration recommendation focus on recommending experts in a certain area or on certain information; they do not consider the working area of the recommendation's target user, as we do in this thesis. Moreover, here, the information about the researchers' relationships, employed for building an academic social network, is implicitly obtained from publication data available in digital libraries. We also expand previous analyses by considering temporal aspects to determine the relationship weights (which may be used for collaboration recommendation purposes); there were no previous studies in this direction. Our approach includes not only the recommendation of new collaborations but also the recommendation of intensifying existing collaborations, which is not considered by other related work. Hence, the recommendation goals of our approach are broader. After proposing new techniques for evaluating and identifying research collaborators, this thesis validates them through an experimental evaluation. Specifically, we evaluate and demonstrate the applicability of our techniques on real datasets of co-authorship relationships among researchers from different research groups.
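A minimal sketch of how content (research-area similarity) and structure (existing, time-weighted co-authorship ties) can be blended into one recommendation score. The exponential decay, the cosine similarity and the linear blend are illustrative assumptions, not the thesis's exact formulas.

```python
import math
from collections import defaultdict

def time_decayed_weights(papers, now_year, half_life=5.0):
    """Co-authorship edge weights with exponential time decay, so recent
    joint papers count more. `papers` is a list of (year, [authors])."""
    w = defaultdict(float)
    for year, authors in papers:
        decay = 0.5 ** ((now_year - year) / half_life)
        for i, a in enumerate(authors):
            for b in authors[i + 1:]:
                w[frozenset((a, b))] += decay
    return w

def cosine(u, v):
    """Cosine similarity of two sparse research-area vectors (dicts)."""
    num = sum(u.get(k, 0) * v.get(k, 0) for k in set(u) | set(v))
    den = math.sqrt(sum(x * x for x in u.values())) * \
          math.sqrt(sum(x * x for x in v.values()))
    return num / den if den else 0.0

def recommend(target, candidates, areas, weights, alpha=0.5):
    """Score candidates for `target`. A high score with no existing tie
    suggests a new collaboration; with a strong tie, intensification."""
    scored = []
    for c in candidates:
        content = cosine(areas[target], areas[c])
        structure = weights.get(frozenset((target, c)), 0.0)  # unnormalized
        scored.append((alpha * content + (1 - alpha) * structure, c))
    return sorted(scored, reverse=True)
```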
86

Applying Multi-criteria Decision Analysis for Software Quality Assessment

Goh, Wan Ai January 2010 (has links)
With the rapid advancement of technology, software has become increasingly prevalent in assisting our daily activities over the last decades. This has raised concerns about software product quality, leading to the question of how to justify whether a software product is of high quality. Numerous studies have therefore devoted considerable effort to software product quality assessment, in order to judge whether the software products under study have satisfactory quality. One of the foremost approaches to assessing software product quality is the application of quality models, for example the ISO 9126 quality model. However, quality models provide no explicit way to aggregate the performance of different quality aspects, nor to handle the various interests raised by different perspectives or stakeholders. Although many studies have been conducted on aggregating the different measures of quality attributes, they are still not capable of including the various interests raised by different software product stakeholders. Some studies have therefore attempted to apply MCDA (multi-criteria decision analysis) methods in order to aggregate the measures of quality attributes into an overall software product quality and to handle the various quality interests. However, they do not provide any rationale for their particular choice of MCDA method; most justify the choice by referring to the high popularity of the selected method. Without studying the suitability of MCDA methods in the application domain of the software product, it is difficult to conclude whether the chosen methods fit the intended software engineering discipline. Furthermore, no systematic approach is available to help software practitioners select the MCDA method that suits their needs and constraints in software product quality assessment. This thesis aims to provide the key concepts for an effective selection of a suitable MCDA method for the purpose of software product quality assessment. The foremost part of this thesis presents two systematic reviews. The first review evaluates the characteristics of MCDA methods. The second identifies the major needs and constraints of software quality assessment that a potential MCDA method has to consider in order to be used for assessing the quality of software products. Based on the results of both systematic reviews, a selection framework named the MCDA-SQA framework is formulated. This framework is intended to assist software practitioners in systematically selecting and adapting appropriate MCDA method(s) in order to fulfil their quality assessment needs and the respective environmental concerns.
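For context, the simplest MCDA aggregation is a weighted sum of normalized attribute scores under stakeholder-supplied weights, as sketched below. The attribute names and weights are invented for illustration; the thesis's MCDA-SQA framework is about selecting a suitable MCDA method, not prescribing this one.

```python
def weighted_sum_quality(scores: dict, weights: dict) -> float:
    """Weighted-sum MCDA over normalized (0..1) quality-attribute scores,
    e.g. ISO 9126 characteristics."""
    total = sum(weights.values())
    return sum(scores[attr] * w for attr, w in weights.items()) / total

quality = weighted_sum_quality(
    scores={"functionality": 0.8, "reliability": 0.6,
            "usability": 0.9, "maintainability": 0.5},
    weights={"functionality": 3, "reliability": 4,
             "usability": 2, "maintainability": 1},
)  # a single aggregate score in 0..1
```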
87

Temporal data analysis facilitating recognition of enhanced patterns

Hönel, Sebastian January 2015 (has links)
Assessing the source code quality of software objectively requires a well-defined model. Due to the distinct nature of every project, the definition of such a model is specific to the underlying paradigms used. One can pick metrics from standard norms to define measurements for qualitative assessment. Software projects develop over time, and a wide variety of refactorings is applied to the code, which makes the process temporal. In this thesis the temporal model was enhanced using methods known from financial markets and further evaluated using artificial neural networks, with the goal of improving prediction precision by learning from more detailed patterns. It was also investigated whether the combination of technical analysis and machine learning is viable, and how to blend them. An in-depth selection of applicable instruments and algorithms was made, and extensive experiments were run to approximate answers. It was found that enhanced patterns are of value for further processing by neural networks. Technical analysis, however, was not able to improve the results, although it is assumed that it can for an appropriately sized problem set.
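The abstract does not name the specific financial-market methods used. As a hedged illustration of the general idea, the sketch below applies one such instrument, an exponential moving average, to a software-metric time series and cuts the result into the sliding-window patterns a neural network could learn from.

```python
import numpy as np

def ema(series, span: int = 10):
    """Exponential moving average, a smoothing borrowed from technical
    analysis, applied here to a metric time series."""
    alpha = 2.0 / (span + 1.0)
    out = np.empty(len(series), dtype=float)
    out[0] = series[0]
    for t in range(1, len(series)):
        out[t] = alpha * series[t] + (1.0 - alpha) * out[t - 1]
    return out

def windows(series, width: int = 16):
    """Sliding windows: each sample is `width` past values, the target
    is the next value."""
    x = np.asarray([series[t:t + width] for t in range(len(series) - width)])
    y = np.asarray([series[t + width] for t in range(len(series) - width)])
    return x, y
```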
88

La notion de « qualité » des publications dans l’évaluation de la recherche et des chercheurs en sciences humaines et sociales: Le potentiel de l’Open Access pour dépasser le paradoxe des prescriptions en matière de qualité et l’ambivalence de leur perception par les chercheurs en sciences de la communication / The notion of publication "quality" in the evaluation of research and researchers in the social sciences and humanities: the potential of Open Access to overcome the paradox of quality prescriptions and the ambivalence of their perception by communication researchers

Vanholsbeeck, Marc 26 February 2016 (has links)
1. A first contribution of our work is to propose an original theoretical, analytical and conceptual framework for approaching the notion of publication quality in the SSH (social sciences and humanities) and in communication sciences in a way that is both holistic and dynamic, insofar as quality is the object of multiple descriptions and judgments, issued by a diversity of stakeholders within and outside academia. To do so, quality is considered in its different constitutive dimensions (holistic approach), while being situated within ongoing trends in scholarly publishing (dynamic approach) and taking into account quality as it is prescribed, desired and implemented by the different stakeholders (researchers and prescribing bodies, at the political and managerial levels). By systematically crossing these three approaches - the multidimensional approach, the relation to prescriptions and wishes, and the study of ongoing trends - it becomes possible to assess the impact of the main trends in scholarly publishing - massification, internationalization, "exoterization" (opening up to the outside world, beyond peers), "managerialization" (the use of publications in the management of research and researchers, particularly in evaluation settings), commercialization and "onlining" (posting publications online, on the Internet) - as well as of the managerial and political prescriptions that initiate, stimulate or extend them to varying degrees, on the quality of the very activity of publishing and on that of the different generic and specific types of published objects. 2. By applying this threefold approach to the SSH and, more specifically, to the case of communication sciences, we show how most of the trends discussed here, and the political and managerial prescriptions attached to them, end up valuing above all, in the evaluation of research and researchers, the publication of a large number of articles in leading international scholarly journals addressed primarily to peers, and devaluing publications that are open to more local audiences, written in the vernacular, or devoted to solving societal problems. In particular, under the trend toward the "managerialization" of publications, the article in a leading international scholarly journal, together with the citations it receives from peers alone, is set up as a performance indicator of the first rank, thereby "fixing" researchers' research and publication practices. This "fixing" is all the more pronounced when, at the national level, bibliometric indicators are integrated into performance-based public research funding schemes and when, at the international level, indicators play a leading role in establishing university rankings and benchmarks of national and regional research systems.
At the same time, political prescriptions are also issued, mainly at the European level, with a view to building, within the European Research Area and, to a lesser extent, the European Higher Education Area, a knowledge economy that is competitive on a global scale and, more specifically, a "mode 2" of knowledge production; these prescriptions insist on the importance of better valorizing the results of interdisciplinary and cooperative research among extra-academic stakeholders. The result is a paradoxical relation between the trend toward the exoterization of research and publications and the prescriptions for the managerialization of publications, as well as between the prescriptions that respectively underpin them. 3. The survey we conducted among the members of three international scholarly societies in communication sciences shows how thoroughly researchers in this discipline have by now internalized the quality criteria promoted by the political and managerial prescriptions supporting the establishment of a new "publication culture", at the crossroads of the trends toward the massification, internationalization and managerialization of publications. Nevertheless, in-depth interviews with communication researchers active in French- and Dutch-speaking Belgium reveal that these researchers develop a fundamentally ambivalent attitude toward the "publish or perish" culture and toward prescriptions that over-value leading international scholarly journals in the evaluation of research and researchers. On the one hand, the researchers we interviewed consider that the new publication culture plays a beneficial role in the professionalization of communication sciences and in the development of a genuinely scientific culture within them; accordingly, most develop strategies to align their publication practices with the prescriptions. On the other hand, several respondents regret the reductive character of the over-valuation of leading international scholarly journals in evaluation, and wish that evaluators would take a greater diversity of publication types into account. In order to reconcile "prescribed quality" and "desired quality" in the quality of their actual publication activity and of the objects actually published ("real quality"), these researchers therefore sometimes "tinker" with the prescriptions. Moreover, most respondents - more so in the Wallonia-Brussels Federation than in Flanders, where public research funding is already partly based on bibliometric and journal-level indicators - regret the lack of explicitness in the formulation of the prescriptions, which regularly take the form of rather indirect and/or implicit "scripts" instead of norms and rules in the strict sense, as well as the absence of a minimum quantitative threshold to be reached.
4. From a more normative standpoint, it therefore seems to us that the systematic deposit of the different types of publication produced by SSH and communication researchers in institutional digital repositories (Green Open Access) would help resolve the paradox of the prescriptions concerning "prescribed quality", as well as the ambivalence of researchers' perceptions concerning "desired quality". Depositing publications in institutional repositories opens unprecedented opportunities to renew the scholarly conversation that is structured around published objects within the argumentative community (Kommunikationsgemeinschaft) of peers, notably through open peer review and the possibility of commenting ad libitum on publications disseminated in Open Access, but also by making research results easily accessible and reusable by extra-academic stakeholders. The opportunities attached to depositing publications in (Green) Open Access repositories, in terms of both the epistemic and the pragmatic quality of those publications, will be all the more fruitful as the deposit of works in institutional repositories is combined with the researcher's use of the appropriate generic or dedicated instruments of the participatory Web (wikis, blogs, micro-blogs, social networks, social bookmarking and bibliographic list-sharing tools). Digital repositories, moreover, now function as "transparency tools" capable of giving greater visibility to diversified research outputs and publication types. In the evaluation of research and researchers, recourse to institutional repositories - provided that a mandate prescribes the deposit of all works produced by the institution's researchers - would allow evaluators to base their judgment on a wider and more representative range of publication types and forms of communication in the SSH and in communication sciences. In addition, thanks to Open Access dissemination, in conjunction with the use of a diversity of participatory Web tools, it becomes more feasible to subject the different types of publication archived and published in open access to performance indicators that are themselves diversified - bibliometric, but also "webometric" and "altmetric" - based on articles rather than journals and better suited to the diversity of their impacts, both within and beyond the circle of peers. 5. Hence, (Green) Open Access ultimately appears to us to hold significant potential for integrating SSH and communication research and researchers into the construction - beyond a knowledge economy - of a true knowledge society, and into the processes of techno-industrial, social and intellectual innovation that underpin it. / Doctorat en Information et communication / info:eu-repo/semantics/nonPublished
89

Comparison of Video Quality Assessment Methods

Jung, Agata January 2017 (has links)
Context: The newest video coding standard, High Efficiency Video Coding (HEVC), needs an appropriate coder to fully exploit its potential. Many video quality assessment methods exist; such methods are necessary to establish the quality of a video. Objectives: This thesis compares video quality assessment methods. The objective is to find out which objective method agrees best with the subjective method. The videos used in the tests are encoded in the H.265/HEVC standard. Methods: The MSE, PSNR and SSIM methods were tested with purpose-built MATLAB software; the VQM method was tested with downloaded software. Results and conclusions: For videos watched on a mobile device, PSNR is the metric most similar to the subjective one; for videos watched on a television screen, VQM is the most similar. Keywords: Video Quality Assessment, Video Quality Prediction, Video Compression, Video Quality Metrics
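Two of the compared metrics are simple enough to state in a few lines; a sketch of frame-level MSE and PSNR follows (SSIM and VQM require considerably more machinery). Frames are assumed to be 8-bit arrays, and a per-video score would average over frames before being correlated with the subjective MOS.

```python
import numpy as np

def mse(ref: np.ndarray, dist: np.ndarray) -> float:
    """Mean squared error between a reference and a distorted frame."""
    diff = ref.astype(np.float64) - dist.astype(np.float64)
    return float(np.mean(diff ** 2))

def psnr(ref: np.ndarray, dist: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB; higher means closer to the
    reference."""
    m = mse(ref, dist)
    return float("inf") if m == 0.0 else 10.0 * np.log10(peak * peak / m)
```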
90

Évaluation du contenu d'une image couleur par mesure basée pixel et classification par la théorie des fonctions de croyance / Evaluation of the content of a color image by pixel-based measure and classification through the theory of belief functions

Guettari, Nadjib 10 July 2017 (has links)
De nos jours, il est devenu de plus en plus simple pour qui que ce soit de prendre des photos avec des appareils photo numériques, de télécharger ces images sur l'ordinateur et d'utiliser différents logiciels de traitement d'image pour appliquer des modifications sur ces images (compression, débruitage, transmission, etc.). Cependant, ces traitements entraînent des dégradations qui influent sur la qualité visuelle de l'image. De plus, avec la généralisation de l'internet et la croissance de la messagerie électronique, des logiciels sophistiqués de retouche d'images se sont démocratisés permettant de falsifier des images à des fins légitimes ou malveillantes pour des communications confidentielles ou secrètes. Dans ce contexte, la stéganographie constitue une méthode de choix pour dissimuler et transmettre de l'information.Dans ce manuscrit, nous avons abordé deux problèmes : l'évaluation de la qualité d'image et la détection d'une modification ou la présence d'informations cachées dans une image. L'objectif dans un premier temps est de développer une mesure sans référence permettant d'évaluer de manière automatique la qualité d'une image en corrélation avec l'appréciation visuelle humaine. Ensuite proposer un outil de stéganalyse permettant de détecter, avec la meilleure fiabilité possible, la présence d'informations cachées dans des images naturelles. Dans le cadre de cette thèse, l'enjeu est de prendre en compte l'imperfection des données manipulées provenant de différentes sources d'information avec différents degrés de précision. Dans ce contexte, afin de profiter entièrement de l'ensemble de ces informations, nous proposons d'utiliser la théorie des fonctions de croyance. Cette théorie permet de représenter les connaissances d'une manière relativement naturelle sous la forme d'une structure de croyances. Nous avons proposé une nouvelle mesure sans référence d'évaluation de la qualité d'image capable d'estimer la qualité des images dégradées avec de multiple types de distorsion. Cette approche appelée wms-EVreg2 est basée sur la fusion de différentes caractéristiques statistiques, extraites de l'image, en fonction de la fiabilité de chaque ensemble de caractéristiques estimée à travers la matrice de confusion. À partir des différentes expérimentations, nous avons constaté que wms-EVreg2 présente une bonne corrélation avec les scores de qualité subjectifs et fournit des performances de prédiction de qualité compétitives par rapport aux mesures avec référence.Pour le deuxième problème abordé, nous avons proposé un schéma de stéganalyse basé sur la théorie des fonctions de croyance construit sur des sous-espaces aléatoires des caractéristiques. La performance de la méthode proposée a été évaluée sur différents algorithmes de dissimulation dans le domaine de transformé JPEG ainsi que dans le domaine spatial. Ces tests expérimentaux ont montré l'efficacité de la méthode proposée dans certains cadres d'applications. Cependant, il reste de nombreuses configurations qui résident indétectables. / Nowadays it has become increasingly simple for anyone to take pictures with digital cameras, to download these images to a computer, and to use various image processing software packages to apply modifications to these images (compression, denoising, transmission, etc.). However, these treatments lead to degradations that affect the visual quality of the image.
In addition, with the widespread use of the Internet and the growth of electronic mail, sophisticated image-editing software has become widely available, making it possible to falsify images for legitimate or malicious purposes in confidential or secret communications. In this context, steganography is a method of choice for embedding and transmitting information. In this manuscript we address two problems: image quality assessment and the detection of a modification, or of the presence of hidden information, in an image. The first objective is to develop a no-reference measure that automatically evaluates the quality of an image in correlation with human visual appreciation. We then propose a steganalysis scheme to detect, with the best possible reliability, the presence of information embedded in natural images. In this thesis, the challenge is to take into account the imperfection of the manipulated data, which come from different sources of information with different degrees of precision. In this context, in order to take full advantage of all this information, we propose to use the theory of belief functions. This theory makes it possible to represent knowledge in a relatively natural way in the form of a belief structure. We propose a no-reference image quality assessment measure that is able to estimate the quality of images degraded by multiple types of distortion. This approach, called wms-EVreg2, is based on the fusion of different statistical features extracted from the image, according to the reliability of each feature set as estimated through the confusion matrix. From the various experiments, we found that wms-EVreg2 correlates well with subjective quality scores and provides quality prediction performance that is competitive with full-reference image quality measures. For the second problem addressed, we propose a steganalysis scheme based on the theory of belief functions, built on random subspaces of the features. The performance of the proposed method was evaluated on different steganography algorithms in the JPEG transform domain as well as in the spatial domain. These experiments showed the effectiveness of the proposed method in some application settings. However, many configurations remain undetectable.
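At the core of both proposed schemes is the fusion of imperfect sources using the theory of belief functions. Below is a minimal sketch of Dempster's rule of combination over a toy frame {good, degraded}; the mass values are invented, and the thesis's actual fusion (reliability-weighted through the confusion matrix) is more elaborate.

```python
def dempster_combine(m1: dict, m2: dict) -> dict:
    """Dempster's rule for two mass functions whose focal elements are
    frozensets over a common frame of discernment."""
    combined, conflict = {}, 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb  # mass falling on the empty set
    k = 1.0 - conflict
    if k == 0.0:
        raise ValueError("sources are in total conflict")
    return {s: v / k for s, v in combined.items()}

# Two feature sets as imperfect sources, each keeping some mass on the
# whole frame (ignorance):
good, degraded = frozenset({"good"}), frozenset({"degraded"})
frame = good | degraded
m_a = {good: 0.6, degraded: 0.1, frame: 0.3}
m_b = {good: 0.5, degraded: 0.2, frame: 0.3}
print(dempster_combine(m_a, m_b))  # belief shifts toward 'good'
```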
