151

Využití pokročilých objektivních kritérií hodnocení při kompresi obrazu / Advanced objective measurement criteria applied to image compression

Šimek, Josef January 2010
This diploma thesis deals with the use of objective quality assessment methods in image data compression. Lossy compression always introduces some distortion into the processed data, degrading image quality. The severity of this distortion can be measured using subjective or objective methods; objective criteria are needed to optimize compression algorithms. This work presents the SSIM index as a useful tool for describing the quality of compressed images. The lossy compression scheme is realized using the wavelet transform and the SPIHT algorithm. A modification of this algorithm was implemented that partitions the wavelet coefficients into separate tree-preserving blocks that are coded independently, which is especially suitable for parallel processing. For a given compression ratio, the traditional problem is to allocate the available bits among the spatial blocks so as to achieve the highest possible image quality. Possible approaches to this problem were discussed, and several bit-allocation methods based on the MSSIM index were proposed. The effectiveness of these methods was tested in the MATLAB environment.
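The MSSIM used above for bit allocation is the mean of local SSIM scores; a minimal single-window SSIM sketch in Python with NumPy may help make the metric concrete. It is a simplification (the published index averages an 11x11 Gaussian sliding window, and the constants here assume 8-bit images), and the function name is illustrative, not the thesis's MATLAB code:

```python
import numpy as np

def global_ssim(x, y, L=255, k1=0.01, k2=0.03):
    """Single-window SSIM between two equal-shape grayscale images.

    A simplification: standard SSIM computes this statistic in a sliding
    window and averages the local scores (that mean is the MSSIM).
    """
    x, y = x.astype(np.float64), y.astype(np.float64)
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2              # stability constants
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()          # covariance of x and y
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    )
```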
152

Methods for transcriptome reconstruction, with an application in Picea abies (L.) H. Karst.

Westrin, Karl Johan January 2021
Transcriptome reconstruction is an important component of the bioinformatic part of transcriptome studies. It is particularly interesting when a reference genome is missing, highly fragmented or incomplete, since in such situations a simple alignment (or mapping) would not necessarily tell the full story. One species with such a highly fragmented reference genome is the Norway spruce (Picea abies (L.) H. Karst.), a conifer of great importance to the Swedish economy. Given its long juvenile phase and irregular cone setting, the demand for cultivated seeds is larger than the supply, which motivates a desire to understand the transcriptomic biology behind cone setting in P. abies. This thesis presents an introduction to this situation and to the biological and bioinformatic background in general, followed by two papers in which this is applied: Paper I introduces a novel de novo transcriptome assembler with a focus on recovering isoforms, and Paper II uses this assembler to detect connections between scaffolds in the P. abies genome. Paper I also studies P. abies var. acrocona, a mutant with a shorter juvenile phase than the wild type, in order to determine how cone setting is initiated. From differential expression studies of both mRNA and miRNA, a number of genes potentially involved in cone setting in P. abies were found, as well as a set of miRNAs that could be involved in their regulation.
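The differential expression analysis mentioned at the end is the most readily illustrated step; a toy per-gene test in Python (Welch's t-test on log-transformed values; production pipelines use negative-binomial models such as DESeq2 or edgeR, and the condition labels here are assumptions for illustration):

```python
import numpy as np
from scipy import stats

def de_test(expr_a, expr_b):
    """Toy differential expression test for one gene.

    expr_a, expr_b: 1-D arrays of normalized expression values for two
    conditions (e.g. acrocona mutant vs. wild type). Returns the log2
    fold change and the Welch's t-test p-value on log2(x + 1) values.
    """
    la = np.log2(np.asarray(expr_a, dtype=float) + 1)
    lb = np.log2(np.asarray(expr_b, dtype=float) + 1)
    _, p = stats.ttest_ind(la, lb, equal_var=False)
    return la.mean() - lb.mean(), p
```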
153

The use of hyperspectral sensors for quality assessment: A quantitative study of moisture content in indoor vertical farming

Ahaddi, Arezo, Al-Husseini, Zeineb January 2023
Purpose: This study investigates how hyperspectral sensing can assess the moisture content of lettuce by monitoring its growth in indoor vertical farming. Research questions: “What accuracy can be achieved when using hyperspectral sensing for assessing the moisture content of lettuce leaves grown in vertical farming?” “How can vertical farming contribute to sustainability in conjunction with the integration of NIR spectroscopy?” Methodology: This is an experimental study with a deductive approach, in which experiments were performed using two hyperspectral technologies, a single-spot sensor and the hyperspectral camera Specim FX17, to collect spectral data. To analyze the experimental data, two regression models were trained to predict future moisture-content values in lettuce. To better understand and analyze the experimental results, a literature review was also conducted on how hyperspectral imaging has been applied to assess the quality of food products. Conclusion: The achieved accuracies were 58.24% for the PLS regression model and 65.54% for the Neural Network model. Employing hyperspectral sensing as a non-destructive technique to assess the quality of food products grown and harvested in vertical farming systems contributes to sustainability in several ways, such as reducing food waste, minimizing costs and detecting the quality attributes that affect food products.
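The PLS step described in the methodology can be sketched with scikit-learn; everything specific below (array shapes, the component count, the random placeholder data) is an illustrative assumption, not the study's configuration:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

# X: one row per lettuce sample, one column per NIR wavelength band
# (e.g. reflectances from a Specim FX17 line); y: measured moisture (%).
rng = np.random.default_rng(0)
X = rng.random((120, 224))                 # placeholder spectra
y = rng.uniform(90, 98, 120)               # placeholder moisture values

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
pls = PLSRegression(n_components=10)       # would be tuned by cross-validation
pls.fit(X_tr, y_tr)
print("held-out R^2:", pls.score(X_te, y_te))
```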
154

Quality Assessment for Halftone Images

Elmèr, Johnny January 2023
Halftones are reproductions of images created through the process of halftoning. The goal of a halftone is to create a replica of an image that, at a distance, looks nearly identical to the original. Several methods for producing halftones are available, three of which are error diffusion, DBS and IMCDP. To check whether a halftone would be perceived as of high quality there are two options: subjective image quality assessments (IQAs) and objective image quality (IQ) measurements. As subjective IQAs often take too much time and too many resources, objective IQ measurements are preferred; but since there is no standard for which metric should be used with halftones, the question arises of which one to choose. For this project, both online and on-location subjective tests were performed in which observers ranked halftoned images by perceived image quality, the images being chosen specifically to cover a wide range of characteristics such as brightness and level of detail. The results of these tests were compiled and compared with eight objective metrics: MSE, PSNR, S-CIELAB, SSIM, BlurMetric, BRISQUE, NIQE and PIQE. The subjective and objective results were compared using Z-scores and showed that SSIM and NIQE were the objective metrics that most closely matched the subjective results. The online and on-location subjective tests differed greatly for dark colour halftones and colour halftones containing smooth transitions, with smaller variation for the other categories. What did not change was the clear preference for DBS by both the observers and the objective IQ metrics, making it the best of the three methods tested. / The thesis work was carried out at the Department of Science and Technology (ITN), Faculty of Science and Engineering, Linköping University.
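Of the three halftoning methods compared, error diffusion is the easiest to show in a few lines; a sketch of the classic Floyd-Steinberg variant in Python follows (the standard 7/16, 3/16, 5/16, 1/16 kernel, not the thesis's exact implementation; DBS and IMCDP are considerably more involved):

```python
import numpy as np

def floyd_steinberg(gray):
    """Binarize a grayscale image (floats in [0, 1]) by error diffusion.

    Each pixel is thresholded at 0.5 and its quantization error is pushed
    onto the four not-yet-processed neighbours with the classic weights.
    """
    img = gray.astype(np.float64).copy()
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            img[y, x] = new
            err = old - new
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h and x > 0:
                img[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:
                img[y + 1, x] += err * 5 / 16
            if y + 1 < h and x + 1 < w:
                img[y + 1, x + 1] += err * 1 / 16
    return img
```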
155

Metadata Quality Assurance for Audiobooks: An explorative case study on how to measure, identify and solve metadata quality issues

Carlsson, Patrik January 2023
Metadata is essential to how (digital) archives, collections and databases operate. It is the backbone for organising different types of content, making them discoverable and preserving digital records' authenticity, integrity and meaning over time. For that reason, it is also important to iteratively assess whether the metadata is of high quality. Despite its importance, there is an acknowledged lack of research verifying whether existing assessment frameworks and methodologies actually work and, if so, how well, especially in fields outside libraries. This thesis therefore conducted an exploratory case study, applying existing frameworks in a new context by evaluating the metadata quality of audiobooks. The Information Continuum Model was used to capture the metadata quality needs of customers/end users who search for and listen to audiobooks. Using a mixed-methods approach, the results showed that the frameworks can indeed be generalised and adapted to a new context. However, although the frameworks helped measure, identify and find potential solutions to the problems, they could be better adjusted to the context, and more metrics and information could be added. Thus, there can be a generalised method for assessing metadata quality, but the method needs improvement and must be used by people who understand the data and the processes in order to reach its full potential.
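Completeness is one of the metrics such metadata quality frameworks typically compute; a minimal sketch in Python (the required-field list is invented for illustration and is not the thesis's audiobook schema):

```python
REQUIRED_FIELDS = ["title", "author", "narrator", "language", "duration"]

def completeness(record: dict) -> float:
    """Fraction of required fields that are present and non-empty."""
    filled = sum(1 for f in REQUIRED_FIELDS if str(record.get(f, "")).strip())
    return filled / len(REQUIRED_FIELDS)

# An audiobook record missing its narrator field scores 4/5 = 0.8.
book = {"title": "Pippi Longstocking", "author": "Astrid Lindgren",
        "language": "sv", "duration": "2h31m"}
print(completeness(book))
```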
156

The Effect of Wetland Size and Surrounding Land Use on Wetland Quality along an Urbanization Gradient in the Rocky River Watershed

Gunsch, Marilyn S. 29 October 2008
No description available.
157

Image/video compression and quality assessment based on wavelet transform

Gao, Zhigang 14 September 2007
No description available.
158

Dazai to Digital: Assessing Translation Accuracy of “Ningen Shikkaku” Across ChatGPT-4, Donald Keene, and Mark Gibeau

Malmqvist, Emilia January 2024
This study assesses the translation accuracy of ChatGPT-4 against two human translators, Donald Keene and Mark Gibeau, focusing on the first 50 sentences of Osamu Dazai's Japanese novel "Ningen Shikkaku" translated into English. In the rapidly advancing field of artificial intelligence, where AI increasingly integrates into fields such as translation that were traditionally occupied by humans, it examines the effectiveness and reliability of AI in capturing both the literal and figurative meaning of a literary text. A significant gap in the field is the scarcity of comparative studies between AI and human translators, all the more so in Japanese-English translation; most existing research on AI translation focuses on European languages or evaluates AI against other machine translation tools. The study employs a translation quality assessment framework based on how erroneous the translations are, in which either one or two points are deducted for each error depending on severity. The identified error types are grounded in the standardized error marking system of the American Translators Association, and the framework endeavors to provide an objective measure of translation quality. The results show that ChatGPT-4's translation incurred the fewest point deductions, roughly half as many as those of Gibeau and Keene. Gibeau's translation ranked second in accuracy, with Keene's trailing closely behind. The results also reveal that Keene's translation errors typically stemmed from altered words and phrases, while Gibeau's translation tended to add, intensify, or omit elements. ChatGPT-4's translation had fewer errors overall, except in relation to literalness. It is discussed that the utility of AI in literary translation varies depending on whether accuracy or aesthetics is most valued. Nevertheless, translators can already utilize AI to manage routine tasks and accelerate translation processes, enabling them to concentrate on aspects such as flow, rhythm, and readability.
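The point-deduction scheme can be sketched directly from the description above; the one- versus two-point split per error follows the abstract, while the data structure and category names are assumptions for illustration:

```python
from dataclasses import dataclass

@dataclass
class TranslationError:
    category: str   # e.g. "omission", "addition", "altered phrase"
    severe: bool    # True -> 2-point deduction, False -> 1 point

def total_deduction(errors: list[TranslationError]) -> int:
    """Total points deducted for one translated passage; lower is better."""
    return sum(2 if e.severe else 1 for e in errors)

sample = [TranslationError("altered phrase", severe=True),
          TranslationError("omission", severe=False)]
print(total_deduction(sample))  # 3
```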
159

Model-based Tests for Standards Evaluation and Biological Assessments

Li, Zhengrong 27 September 2007
Implementation of the Clean Water Act requires agencies to monitor aquatic sites on a regular basis and evaluate the quality of these sites. Sites are evaluated individually even though there may be numerous sites within a watershed. In some cases, sampling frequency is inadequate and the evaluation of site quality may have low reliability. This dissertation evaluates testing procedures for the determination of site quality based on model-based procedures that allow other sites to contribute information to the data from the test site. Test procedures are described for situations that involve multiple measurements from sites within a region, and for single measurements when stressor information is available or when covariates are used to account for individual site differences. Tests based on analysis-of-variance methods are described for fixed-effects and random-effects models. The proposed model-based tests compare limits (tolerance limits or prediction limits) for the data with the known standard. When the sample size for the test site is small, model-based tests improve the detection of impaired sites. The effects of sample size, heterogeneity of variance, and similarity between sites are discussed. Reference-based standards and the corresponding evaluation of site quality are also considered. Regression-based tests provide methods for incorporating information from other sites when there is information on stressors or covariates. Extension of some of the methods to multivariate biological observations and stressors is also discussed. Redundancy analysis is used as a graphical method for describing the relationship between biological metrics and stressors. A clustering method for finding stressor-response relationships is presented and illustrated using data from the Mid-Atlantic Highlands. Multivariate elliptical and univariate regions for assessment of site quality are discussed. / Ph.D.
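The comparison of a prediction limit with a known standard can be illustrated under normality assumptions; below is a sketch of the usual upper prediction limit for one future observation (the "larger is worse" decision rule, e.g. for a pollutant concentration, and the sample values are assumptions, not the dissertation's full procedure):

```python
import numpy as np
from scipy import stats

def upper_prediction_limit(samples, alpha=0.05):
    """Upper 100(1-alpha)% prediction limit for one future observation:
    PL = xbar + t_{1-alpha, n-1} * s * sqrt(1 + 1/n), assuming normality."""
    x = np.asarray(samples, dtype=float)
    n = x.size
    t = stats.t.ppf(1 - alpha, df=n - 1)
    return x.mean() + t * x.std(ddof=1) * np.sqrt(1 + 1 / n)

site = [3.1, 2.7, 3.4, 2.9, 3.0]     # hypothetical concentrations
standard = 4.0                        # hypothetical numeric standard
print(upper_prediction_limit(site) <= standard)  # site passes if True
```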
160

The development, assessment, and selection of questionnaires.

Pesudovs, Konrad, Burr, J.M., Harley, Clare, Elliott, David B. January 2007
Patient-reported outcome measurement has become accepted as an important component of comprehensive outcomes research. Researchers wishing to use a patient-reported measure must either develop their own questionnaire (called an instrument in the research literature) or choose from the myriad of instruments previously reported. This article summarizes how previously developed instruments are best assessed using a systematic process, and we propose a system of quality assessment so that clinicians and researchers can determine whether there exists an appropriately developed and validated instrument that matches their particular needs. These quality assessment criteria may also be useful in guiding new instrument development and refinement. We welcome debate over the appropriateness of these criteria, as this will lead to the evolution of better quality assessment criteria and, in turn, better assessment of patient-reported outcomes.
