1

Studies on the assessment of water quality using the hydroids Hydra littoralis and Campanularia flexuosa

Santiago-Fandino, Vincente J. R. January 1989 (has links)
No description available.
2

A Strategy Oriented, Machine Learning Approach to Automatic Quality Assessment of Wikipedia Articles

De La Calzada, Gabriel 01 April 2009 (has links) (PDF)
This work discusses an approach to modeling and measuring the information quality of Wikipedia articles. The approach is based on the idea that the quality of Wikipedia articles with distinctly different profiles needs to be measured using different information quality models. To implement this approach, a software framework written in the Java language was developed to collect and analyze information about Wikipedia articles. We report on our initial study, which involved two categories of Wikipedia articles: "stabilized" (those whose content has not undergone major changes for a significant period of time) and "controversial" (articles that have undergone vandalism, revert wars, or whose content is subject to internal discussions between Wikipedia editors). In addition, we present simple information quality models and compare their performance on a subset of Wikipedia articles against the information quality evaluations provided by human users. Our experiment shows that using special-purpose models for information quality captures user sentiment about Wikipedia articles better than using a single model for both categories of articles.
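The category-dependent modeling idea lends itself to a compact illustration. The Python sketch below is a minimal rendering of the concept only, not the thesis's actual Java framework; the feature names, threshold, and weights are illustrative assumptions.

```python
# Minimal sketch: score a Wikipedia article with a model chosen by its
# profile. Features, weights, and the threshold are illustrative
# assumptions, not values from the study.

def stabilized_score(article: dict) -> float:
    # For stable articles, assume longevity and sourcing dominate.
    return (0.6 * article["days_since_major_edit"] / 365
            + 0.4 * article["citations_per_section"])

def controversial_score(article: dict) -> float:
    # For contested articles, assume revert activity and disputes
    # should discount the sourcing signal.
    churn = article["reverts_last_90d"] + article["talk_page_disputes"]
    return article["citations_per_section"] / (1.0 + churn)

def quality(article: dict) -> float:
    # Dispatch to a special-purpose model instead of one global model.
    if article["reverts_last_90d"] > 5:
        return controversial_score(article)
    return stabilized_score(article)
```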
3

INFORMATION THEORETIC CRITERIA FOR IMAGE QUALITY ASSESSMENT BASED ON NATURAL SCENE STATISTICS

Zhang, Di January 2009 (has links)
Measurement of visual quality is crucial for various image and video processing applications. It is widely applied in image acquisition, media transmission, video compression, image/video restoration, etc. The goal of image quality assessment (QA) is to develop a computable quality metric which can properly evaluate image quality. The primary criterion is consistency with human judgment; computational complexity and resource limitations are also concerns in a successful QA design. Many methods have been proposed. Initially, quality measurements were taken directly from simple distance measurements that reflect mathematical signal fidelity, such as mean squared error or Minkowski distance. Later, QA was extended to color spaces and the Fourier domain, in which images are better represented. Some existing methods also consider the adaptive ability of human vision. Unfortunately, the Video Quality Experts Group indicated that none of the more sophisticated metrics showed any great advantage over other existing metrics. This thesis proposes a general approach to the QA problem based on evaluating image information entropy. An information-theoretic model for the human visual system is proposed, and an information-theoretic solution is presented to derive the proper settings. The quality metric is validated on five subjective databases from different research labs, and the key points for a successful quality metric are investigated. In testing, the metric exhibits excellent consistency with human judgments and compatibility across databases. In addition to the full-reference quality metric, blind quality assessment metrics are also proposed. To predict quality without a reference image, two concepts are introduced which quantitatively describe inter-scale dependency under a multi-resolution framework. Building on the full-reference metric, several blind quality metrics are proposed for five different types of distortion in the subjective databases. The blind metrics outperform all existing blind metrics and can also handle some distortions which have not previously been investigated.
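As a point of reference for the entropy-based direction described above, the hedged Python sketch below contrasts a plain signal-fidelity measure (MSE) with a simple histogram-entropy estimate. It is a toy illustration of the two families of measures the abstract mentions, not the metric derived in the thesis.

```python
import numpy as np

def mse(ref: np.ndarray, dist: np.ndarray) -> float:
    # Mean squared error: a pure signal-fidelity distance.
    ref = ref.astype(np.float64)
    dist = dist.astype(np.float64)
    return float(np.mean((ref - dist) ** 2))

def histogram_entropy(img: np.ndarray, bins: int = 256) -> float:
    # Shannon entropy (in bits) of the 8-bit intensity histogram:
    # a crude stand-in for the image-information quantities that
    # information-theoretic QA builds on.
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist[hist > 0] / hist.sum()
    return float(-np.sum(p * np.log2(p)))
```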
4

Human perception in speech processing

Grancharov, Volodya January 2006 (has links)
The emergence of heterogeneous networks and the rapid increase of Voice over IP (VoIP) applications provide important opportunities for the telecommunications market. These opportunities come at the price of increased complexity in monitoring the quality of service (QoS) and the need to adapt transmission systems to changing environmental conditions. This thesis contains three papers concerned with quality assessment and enhancement of speech communication systems in adverse environments. In paper A, we introduce a low-complexity, non-intrusive algorithm for monitoring speech quality over the network; speech quality is predicted from a set of features that capture important structural information from the speech signal. Papers B and C describe improvements to conventional pre- and post-processing speech enhancement techniques. In paper B, we demonstrate that the causal Kalman filter implementation conflicts with key properties of human perception and propose solutions to the problem. In paper C, we propose adapting the conventional postfilter parameters to changes in noise conditions; a perceptually motivated distortion measure is used in the optimization of the postfilter parameters, and significant improvement over the nonadaptive system is obtained.
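Paper A's non-intrusive idea, predicting quality from features of the degraded signal alone with no clean reference, can be sketched briefly. In the Python sketch below, the features (frame energy and zero-crossing statistics) and the least-squares mapping to listening-test scores are illustrative assumptions, not the thesis's actual feature set or regressor.

```python
import numpy as np

def utterance_features(x: np.ndarray, frame: int = 320) -> np.ndarray:
    # Split the signal into fixed-length frames and summarize two
    # simple per-frame features by their mean and variance.
    frames = x[: len(x) // frame * frame].reshape(-1, frame)
    energy = np.log(np.mean(frames ** 2, axis=1) + 1e-12)
    zcr = np.mean(np.abs(np.diff(np.sign(frames), axis=1)) > 0, axis=1)
    return np.array([energy.mean(), energy.var(), zcr.mean(), zcr.var()])

def fit_and_predict(X: np.ndarray, y: np.ndarray, x_new: np.ndarray) -> float:
    # X: one feature row per training utterance; y: subjective scores.
    # Fit an affine map by least squares, then score a new utterance.
    w, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)
    return float(np.r_[x_new, 1.0] @ w)
```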
5

Natural scene statistics-based blind visual quality assessment in the spatial domain

Mittal, Anish 07 November 2013 (has links)
With the launch of networked handheld devices that can capture, store, compress, send, and display a variety of audiovisual stimuli; high-definition television (HDTV); streaming Internet protocol TV (IPTV); and websites such as YouTube, Facebook, and Flickr, an enormous amount of visual data is making its way to consumers. Because of this, considerable time and resources are being expended to ensure that the end user is presented with a satisfactory quality of experience (QoE). While traditional QoE methods have focused on optimizing delivery networks with respect to throughput, buffer lengths, and capacity, perceptually optimized delivery of multimedia services is also fast gaining importance. This is especially timely given the explosive growth in (especially wireless) video traffic and expected shortfalls in bandwidth. These perceptual approaches attempt to deliver an optimized QoE to the end user by utilizing objective measures of visual quality. In this thesis, we cover a variety of such algorithms that predict the overall QoE of an image or video, depending on the amount of information available for the algorithm design. Typically, quality assessment (QA) algorithms are classified on the basis of the amount of information available to the algorithm. This thesis focuses primarily on blind QA algorithms, where blind or no-reference (NR) QA refers to automatic quality assessment of an image/video using an algorithm which utilizes only the distorted image/video whose quality is being assessed. NR QA approaches are further classified on the basis of whether the algorithm had access to subjective/human opinion prior to deployment. Algorithms which use machine learning techniques along with human judgements of quality during the 'training' phase may be labelled 'opinion-aware' algorithms; the first part of the thesis deals with such approaches. While such opinion-aware NR algorithms demonstrate good correlation with human perception on controlled databases, it is impossible to anticipate all of the different distortions that may occur in a practical system and hence to train on them. In such cases, it is of interest to design QA algorithms whose performance is not limited by training data. Approaches which operate without knowledge of human judgements during the training phase are labelled 'opinion-unaware' (OU) algorithms; we propose such an approach in the second part of the thesis. Further, we propose new VQA algorithms in the last part of the dissertation to address the completely blind VQA problem. The proposed approach quantifies disturbances introduced by distortions and thereby predicts the quality of distorted content without any external knowledge about the pristine natural sources; the models are hence 'zero-shot'.
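The spatial-domain natural scene statistics underlying this line of work can be illustrated with mean-subtracted contrast-normalized (MSCN) coefficients, a common building block of such models. The Python sketch below uses constants that are common choices in the literature, not necessarily those of the dissertation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mscn(img: np.ndarray, sigma: float = 7 / 6, c: float = 1.0) -> np.ndarray:
    # Local mean subtraction and divisive contrast normalization with
    # Gaussian weighting; sigma and c are common choices, assumed here.
    img = img.astype(np.float64)
    mu = gaussian_filter(img, sigma)                  # local mean
    var = gaussian_filter(img * img, sigma) - mu**2   # local variance
    return (img - mu) / (np.sqrt(np.maximum(var, 0)) + c)

# For pristine natural images the MSCN histogram is close to a unit
# Gaussian; distortions change its shape, so simple statistics of these
# coefficients can serve as quality-aware features.
```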
6

The structure and strength of metallurgical coke

Moreland, Angela January 1990 (has links)
This study aimed to investigate the relationship between the tensile strength of metallurgical coke and both the textural composition of the carbon matrix and the porous structure of the coke, and further to assess the use of these structural features as bases for methods of coke strength prediction. The forty-four cokes examined were produced in a small pilot oven from blended-coal charges based on six coals differing widely in rank. Their textural composition was assessed by incident polarized-light microscopy, while pore structural parameters were measured by computerized image analysis allied to reflected-light microscopy. The tensile strength of coke could be related accurately to textural data using several relationships, some of which were based on a model for the tensile failure of coke. Relationships between tensile strength and pore structural parameters were less successful, possibly because of difficulties associated with the measuring system used. Nevertheless, relationships involving combinations of pore structural and textural data were developed and investigated. It was shown that relationships between tensile strength and calculated textural data had promise as the basis of a method of coke strength prediction. Also, tensile strengths could be calculated from the blend composition and the tensile strengths of the cokes produced from the component coals. Both methods have value in different situations.
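The kind of strength-prediction relationship investigated here can be sketched as a least-squares fit of tensile strength against textural composition. In the Python sketch below, all numbers and component choices are illustrative assumptions, not data from the study.

```python
import numpy as np

# Rows: cokes; columns: assumed volume fractions of textural components
# (e.g. anisotropic carbon, isotropic carbon, inerts) from microscopy.
texture = np.array([[0.55, 0.30, 0.15],
                    [0.40, 0.45, 0.15],
                    [0.62, 0.25, 0.13]])
strength_mpa = np.array([6.1, 4.8, 6.7])   # hypothetical measured strengths

# Fit linear coefficients relating texture to strength, then
# back-predict strength from textural composition alone.
coeffs, *_ = np.linalg.lstsq(texture, strength_mpa, rcond=None)
predicted = texture @ coeffs
```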
7

Quality assessment of English language programmes in Libyan universities : with reference to Tripoli University

Aldradi, Ibtesam January 2015 (has links)
This study examined the quality of English language programmes at Libyan universities, and in particular at Tripoli University, in order to identify the factors that have contributed to the decline in standards of students studying English at degree level. The motivation behind selecting this topic is that the English language programme at Tripoli University is dated and not fit for purpose; the programmes are in need of major changes to improve students' language skills. There is a broad literature on the need for research on language programme evaluation across many parts of the world, and many educational systems and teaching institutions undertake periodic evaluation of their programmes. Many key authors agree on the importance of evaluation and argue that evaluation is more than just the collection of information and data; it involves making judgements about the worth, merit, or value of a programme. Programme evaluation is also a form of validation, establishing whether the assessed programme is fit for purpose and meets students' needs and expectations. This study adopted a mixed-methods approach, as relying on a single research approach and strategy would have reduced its effectiveness. The rationale for adopting both quantitative and qualitative research approaches relates to the purpose of the study, the nature of the problem, and the research questions. Quantitative data were collected through questionnaires involving 300 students at Tripoli University (Libya) and were analysed using SPSS. This was supported by qualitative data from semi-structured interviews with eight lecturers at Tripoli University, analysed using content analysis. The findings revealed that most students recognise the need for radical changes to revamp the language programme and address the decline in English language skills. Students are aware of their inadequate English standards, and the findings showed that a majority of students had positive attitudes and were highly motivated to learn English. The conclusions indicated that the English language programme has major shortcomings that need to be addressed, such as resources, teaching and learning facilities, training workshops for staff development, and insufficient library resources. The results also made clear that the English language programme needs to be evaluated on a regular basis to assess its effectiveness and enhance the quality of education. The study makes suggestions that have implications for the improvement and development of the English language programme, and a framework is proposed to reform and revamp it. This study contributes to raising awareness of the importance of evaluating English language programmes, allowing decision-makers to take the steps necessary to promote English. It also makes a theoretical contribution by expanding the literature on quality assessment of English language programmes at Libyan universities, and raises awareness about the root causes of the decline in English language standards.
8

Blind Full Reference Quality Assessment of Poisson Image Denoising

Zhang, Chen 05 June 2014 (has links)
No description available.
9

A Machine Learning Approach to Genome Assessment

Thrash, Charles Adam 09 August 2019 (has links)
A key use of high-throughput sequencing technology is the sequencing and assembly of full genome sequences. These genome assemblies are commonly assessed using statistics relating to the contiguity of the assembly. Measures of contiguity are not strongly correlated with the biological completeness or correctness of the assembly, and a commonly reported metric, N50, can be misleading. Over the past ten years, multiple research groups have rejected the overuse of N50 and sought to develop more informative metrics. This research seeks to create a ranking method that incorporates biologically relevant information about the genome, such as its completeness and correctness. Approximately eight hundred genomes were initially selected, and information about their completeness, contiguity, and correctness was gathered using publicly available tools. Using this information, the genomes were scored by subject matter experts, and this rating system was explored using supervised machine learning techniques: a number of classifiers and regressors were tested using cross-validation. Two metrics were explored in this research. First, a metric describing the distance to the ideal genome was created as a way to incorporate human subject matter expert knowledge into the genome assembly assessment process. Second, random forest regression was found to be the supervised learning method with the highest scores; a model created by an optimized random forest regressor was saved, and a tool was created to load the saved model and rank genomes provided by the end user. Both metrics serve as ways to incorporate human subject matter expert knowledge into genome assembly assessment.
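Since the abstract singles out N50 as a commonly reported but potentially misleading contiguity metric, a short definition-by-code may help: N50 is the length of the contig at which the running sum of contig lengths, sorted in descending order, first reaches half the total assembly size. The Python sketch below is a standard formulation, not code from the thesis tool.

```python
def n50(contig_lengths: list[int]) -> int:
    # Length of the contig at which the cumulative sum of contigs,
    # longest first, first reaches half the total assembly length.
    total = sum(contig_lengths)
    running = 0
    for length in sorted(contig_lengths, reverse=True):
        running += length
        if running * 2 >= total:
            return length
    return 0

# Two assemblies with equal total size but very different contiguity
# illustrate why N50 alone says nothing about biological correctness.
assert n50([50, 30, 20]) == 50
assert n50([20, 20, 20, 20, 20]) == 20
```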
