About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
211

Preprocessing and analysis of environmental data: Application to the water quality assessment of Mexican rivers

Serrano Balderas, Eva Carmina 31 January 2017 (has links)
Data obtained from environmental surveys may be prone to different anomalies (i.e., incomplete, inconsistent, inaccurate, or outlying data). These anomalies affect the quality of environmental data and can have considerable consequences when assessing environmental ecosystems. The selection of data preprocessing procedures is crucial to the validity of statistical analysis results; however, this selection is poorly defined. To address this question, the thesis focused on data acquisition and data preprocessing protocols in order to ensure the validity of data analysis results and, in particular, to recommend the most suitable sequence of preprocessing tasks. We propose to control every step in the data production process, from collection in the field to analysis.
In the case of water quality assessment, this comprises the chemical and hydrobiological analysis of samples, which produces the data subsequently analyzed by a set of statistical and data mining methods. The multidisciplinary contributions of the thesis are: (1) in environmental chemistry: a methodological procedure to determine the content of organochlorine pesticides in water samples using SPE-GC-ECD (Solid Phase Extraction – Gas Chromatography – Electron Capture Detector) techniques; (2) in hydrobiology: a methodological procedure to assess water quality in four Mexican rivers using macroinvertebrate-based biological indices; (3) in data science: a method to assess and guide the selection of preprocessing procedures for data produced in the two previous steps, as well as their analysis; and (4) the development of a fully integrated analytics environment in R for statistical analysis of environmental data in general, and for water quality data analytics in particular. Finally, within the context of this thesis, which was developed between Mexico and France, we applied our methodological approaches to the specific case of water quality assessment of the Mexican rivers Tula, Tamazula, Humaya and Culiacan.
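The thesis's own selection method and its R environment are not reproduced in the abstract. Purely as an illustrative sketch, the core idea of guiding a preprocessing choice by its effect on downstream results can be expressed in a few lines of Python; the synthetic series, the two imputation strategies, and the error metric below are all hypothetical stand-ins, not the thesis's actual procedure:

```python
import random
import statistics

def impute(values, strategy):
    """Fill None entries using the chosen strategy ('mean' or 'median')."""
    observed = [v for v in values if v is not None]
    fill = statistics.mean(observed) if strategy == "mean" else statistics.median(observed)
    return [fill if v is None else v for v in values]

def compare_strategies(values, truth):
    """Score each strategy by mean absolute error against the known true series."""
    scores = {}
    for strategy in ("mean", "median"):
        filled = impute(values, strategy)
        scores[strategy] = sum(abs(a - b) for a, b in zip(filled, truth)) / len(truth)
    return scores

random.seed(1)
truth = [10 + random.gauss(0, 1) for _ in range(100)]          # synthetic "clean" series
with_gaps = [None if i % 7 == 0 else v for i, v in enumerate(truth)]  # inject missing values
scores = compare_strategies(with_gaps, truth)
best = min(scores, key=scores.get)                             # recommended strategy
print(best, scores)
```

In a real study the ground truth is unknown, so such comparisons would instead rely on held-out data or on the stability of downstream statistical results.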
212

Effect of Education on Adult Sepsis Quality Metrics In Critical Care Transport

Schano, Gregory R. 21 June 2019 (has links)
No description available.
213

Comparative Study of the Inference of an Image Quality Assessment Algorithm: Inference Benchmarking of an Image Quality Assessment Algorithm Hosted on Cloud Architectures

Petersson, Jesper January 2023 (has links)
Training and serving machine learning models on a single instance has become increasingly time- and resource-consuming. To solve this issue, cloud computing is used to train and serve the models. However, there is a gap in research where these cloud computing platforms have been evaluated for such tasks. This thesis investigates the inference task of an image quality assessment algorithm on different Machine Learning as a Service architectures. The quantitative metrics used for the comparison are latency, inference time, throughput, carbon footprint, and cost. Machine learning has a wide range of applications, with one of its most popular areas being image recognition or image classification. To effectively classify an image, it is imperative that the image is of high quality. This requirement is not always met, particularly when users capture images with their mobile devices or other equipment. In light of this, there is a need for image quality assessment, which can be achieved by implementing an Image Quality Assessment Model such as BRISQUE. When hosting BRISQUE in the cloud, there is a plethora of hardware options to choose from. This thesis benchmarks these hardware options to evaluate the performance and sustainability of BRISQUE's image quality assessment on various cloud hardware. The evaluation metrics include inference time, hourly cost, effective cost, energy consumption, and emissions. Additionally, the thesis investigates the feasibility of incorporating sustainability metrics, such as energy consumption and emissions, into machine learning benchmarks in cloud environments. The results of the study reveal that the instance type from GCP was generally the best-performing among the 15 tested. The Image Quality Assessment Model appeared to benefit more from a higher number of cores than from a high CPU clock speed.
In terms of sustainability, all instance types displayed a similar level of energy consumption; however, there were variations in emissions. Further analysis revealed that the choice of region played a significant role in determining the emissions produced by the cloud environment. However, the availability of such sustainability data is limited in a cloud environment due to restrictions imposed by cloud providers, making the inclusion of these metrics in machine learning benchmarks in cloud environments problematic.
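The thesis's benchmark harness is not described in detail in the abstract. As a minimal sketch, latency, tail latency, and throughput for any inference callable could be collected as below; the `fake_model` scorer is a hypothetical stand-in, not BRISQUE itself:

```python
import statistics
import time

def benchmark(infer, payloads, warmup=3):
    """Time repeated inference calls and report latency/throughput metrics."""
    for p in payloads[:warmup]:          # warm-up runs, excluded from timing
        infer(p)
    latencies = []
    start = time.perf_counter()
    for p in payloads:
        t0 = time.perf_counter()
        infer(p)
        latencies.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start
    latencies.sort()
    return {
        "mean_s": statistics.mean(latencies),
        "p95_s": latencies[int(0.95 * (len(latencies) - 1))],
        "throughput_per_s": len(payloads) / elapsed,
    }

def fake_model(image):
    """Stand-in scorer; in practice this would be a BRISQUE inference call."""
    return sum(image) / len(image)

report = benchmark(fake_model, [[0.1, 0.5, 0.9]] * 50)
print(report)
```

Cost and carbon metrics would then be derived from these timings together with provider-specific price and emissions data, which, as the thesis notes, cloud providers do not always expose.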
214

Algorithms to Process and Measure Biometric Information Content in Low Quality Face and Iris Images

Youmaran, Richard January 2011 (has links)
Biometric systems identify persons based on physiological or behavioral characteristics, such as voice, handprint, iris or facial characteristics. The use of face and iris recognition to authenticate users' identities has been a topic of research for years. Present iris recognition systems require that subjects stand close (<2 m) to the imaging camera and look at it for about three seconds until the data are captured. This cooperative behavior is required in order to capture quality images for accurate recognition, which restricts the practical applications of iris recognition, especially in uncontrolled environments where subjects, such as criminals or terrorists, cannot be expected to cooperate. For this reason, this thesis develops a collection of methods to deal with low quality face and iris images that can be applied to face and iris recognition in a non-cooperative environment. This thesis makes the following main contributions: I. For eye and face tracking in low quality images, a new robust method is developed. The proposed system consists of three parts: face localization, eye detection and eye tracking. This is accomplished using traditional image-based passive techniques, such as shape information of the eye, and active methods that exploit the spectral properties of the pupil under IR illumination. The developed method is also tested on underexposed images where the subject shows large head movements. II. For iris recognition, a new technique is developed for accurate iris segmentation in low quality images where a major portion of the iris is occluded. Most existing methods perform generally well but tend to overestimate the occluded regions, and thus lose iris information that could be used for identification. This information loss is potentially important in the covert surveillance applications we consider in this thesis.
Once the iris region is properly segmented using the developed method, the biometric feature information is calculated for the iris region using the relative entropy technique. Iris biometric feature information is calculated using two different feature decomposition algorithms based on Principal Component Analysis (PCA) and Independent Component Analysis (ICA). III. For face recognition, a new approach is developed to measure biometric feature information and the changes in biometric sample quality resulting from image degradations. A definition of biometric feature information is introduced and an algorithm to measure it is proposed, based on a set of population and individual biometric features, as measured by a biometric algorithm under test. Examples of its application are shown for two face recognition algorithms based on PCA (Eigenface) and Fisher Linear Discriminant (FLD) feature decompositions.
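The abstract does not give the exact form of the relative entropy measure. As an illustrative sketch only, if each decomposed feature is modeled as an independent univariate Gaussian (an assumption of this sketch, not necessarily the thesis's model), the KL divergence between a person's feature distribution and the population's has a closed form and can be summed over features:

```python
import math

def gaussian_kl(mu_p, sd_p, mu_q, sd_q):
    """KL divergence D(p||q) in bits between two univariate Gaussians."""
    nats = (math.log(sd_q / sd_p)
            + (sd_p ** 2 + (mu_p - mu_q) ** 2) / (2 * sd_q ** 2)
            - 0.5)
    return nats / math.log(2)

def feature_information(person, population):
    """Sum per-feature KL terms, treating features as independent Gaussians."""
    return sum(gaussian_kl(mp, sp, mq, sq)
               for (mp, sp), (mq, sq) in zip(person, population))

person = [(0.8, 0.1), (-0.2, 0.2)]      # hypothetical per-feature (mean, std) for one subject
population = [(0.0, 1.0), (0.0, 1.0)]   # hypothetical per-feature (mean, std) for the population
print(feature_information(person, population))  # bits of identifying information
```

Intuitively, the further a subject's feature distribution sits from the population's, the more bits of identifying information those features carry; image degradations shrink this quantity.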
215

Monitoring of Multicast Video Streaming in Real Time

Hassan, Waleed, Hellström, Martin January 2017 (has links)
The enormous increase in multicast services has shown the limitations of traditional network management tools in multicast quality monitoring. There is a need for new monitoring technologies that are not hardware-based solutions, such as increased link throughput, buffer length and capacity, to enhance the quality of experience. This paper examines the use of FFmpeg and OpenCV, as well as the no-reference image quality assessment algorithm BRISQUE, to improve the quality of service and the quality of experience.
By detecting image quality deficiencies as well as bit errors in the video stream, QoS and QoE can be improved. The purpose of this project was to develop a monitoring system that detects fluctuations in image quality and bit errors in a multicast video stream in real time and then notifies the service provider using SNMP traps. The tests performed in this paper show positive results for the proposed hybrid solution; BRISQUE and FFmpeg alone are each insufficiently adapted for this purpose. FFmpeg can detect decoding errors that usually occur due to serious bit errors, and the BRISQUE algorithm was developed to analyse images and determine subjective image quality. According to the test results, BRISQUE can be used for multicast video analysis because the subjective image quality can be determined with good reliability. The combination of these methods has shown good results but needs to be investigated and developed further.
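The paper's detection pipeline is not specified in the abstract. One plausible building block for the FFmpeg side is scanning decoder log output for error messages; the sketch below matches a few patterns commonly seen in FFmpeg's h264 stderr on corrupted streams (an illustrative, non-exhaustive list, and the sample log lines are fabricated):

```python
import re

# Patterns resembling FFmpeg decoder messages on corrupted streams
# (illustrative only, not an exhaustive or authoritative list).
ERROR_PATTERNS = [
    re.compile(r"error while decoding", re.IGNORECASE),
    re.compile(r"concealing \d+ DC, \d+ AC, \d+ MV errors", re.IGNORECASE),
    re.compile(r"corrupt decoded frame", re.IGNORECASE),
]

def scan_log(lines):
    """Return log lines that indicate decode errors worth raising an alert for."""
    return [ln for ln in lines if any(p.search(ln) for p in ERROR_PATTERNS)]

log = [
    "frame= 250 fps= 25 q=28.0 size= 1024kB",
    "[h264 @ 0x55] error while decoding MB 34 12",
    "[h264 @ 0x55] concealing 210 DC, 210 AC, 210 MV errors in P frame",
]
hits = scan_log(log)
print(len(hits))  # 2 — each hit could then trigger an SNMP trap to the provider
```

In the hybrid design the abstract describes, such decode-error alerts would complement per-frame BRISQUE scores, which capture visual degradation that never surfaces as a decoder error.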
216

Reviewing the Quality of Mixed Methods Research Reporting in Comparative and International Education: A Mixed Methods Research Synthesis

Neequaye, Beryl Koteikor 23 September 2019 (has links)
No description available.
217

Image-based Machine Learning Applications in Nitrate Sensor Quality Assessment and Inkjet Print Quality Stability

Qingyu Yang (6634961) 21 December 2022 (has links)
An on-line quality assessment system in industry is essential to prevent artifacts and guide manufacturing processes. Some well-developed systems can diagnose problems and help control output quality. However, some conventional methods are limited by time consumption and the cost of expensive human labor, so more efficient solutions are needed to guide future decisions and improve productivity. This thesis focuses on developing two image-based machine learning systems to accelerate the manufacturing process: one to benefit nitrate sensor fabrication, and the other to help image quality control for inkjet printers.

In the first work, we propose a system for predicting a nitrate sensor's performance based on non-contact images. Nitrate sensors are commonly used to reflect the nitrate levels of soil conditions in agriculture. In a roll-to-roll system for manufacturing thin-film nitrate sensors, varying characteristics of the ion-selective membrane on screen-printed electrodes are inevitable and affect sensor performance. It is essential to monitor the sensor performance in real time to guarantee the quality of the sensor. We also develop a system for predicting sensor performance in on-line scenarios and making the neural networks efficiently adapt to new data.

Streaks are the number one image quality problem in inkjet printers. In the second work, we focus on developing an efficient method to model and predict missing jets, which are the main contributor to streaks. In inkjet printing, missing jets typically increase over printing time, and the print head needs to be purged frequently to recover missing jets and maintain print quality. We leverage machine learning techniques to develop spatio-temporal models that predict when and where missing jets are likely to occur. The prediction system helps inkjet printers make more intelligent decisions during customer jobs. In addition, we propose another system that automatically identifies missing jet patterns in a large-scale database, which can be used in a diagnostic system to identify potential failures.
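The thesis's spatio-temporal models are not specified in the abstract. Purely as a toy temporal sketch (hypothetical data, hypothetical threshold, and no spatial component), a per-nozzle risk score could weight recent missing-jet observations more heavily to decide which nozzles are likely to fail next:

```python
def nozzle_risk(history, decay=0.7):
    """Exponentially weighted risk per nozzle from past missing-jet maps.

    history: list of per-page boolean lists, True = nozzle was missing.
    Pages are ordered oldest to newest; recent pages weigh more.
    """
    n = len(history[0])
    risk = [0.0] * n
    for page in history:
        risk = [decay * r + (1 - decay) * float(miss)
                for r, miss in zip(risk, page)]
    return risk

history = [
    [False, False, True],
    [False, True, True],
    [False, True, True],
]
risk = nozzle_risk(history)
flagged = [i for i, r in enumerate(risk) if r > 0.4]  # candidates for purging
print(flagged)  # → [1, 2]
```

A learned spatio-temporal model, as the abstract describes, would additionally exploit correlations between neighboring nozzles and printing time rather than a fixed decay and threshold.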
218

Identification and Classification of TTS Intelligibility Errors Using ASR: A Method for Automatic Evaluation of Speech Intelligibility

Henriksson, Erik January 2023 (has links)
In recent years, applications using synthesized speech have become more numerous and publicly available. As the area grows, so does the need to deliver high-quality, intelligible speech, and consequently the need for effective methods of assessing the intelligibility of synthesized speech. The common method of evaluating speech using human listeners has the disadvantages of being costly and time-inefficient. Because of this, alternative methods of evaluating speech automatically, using automatic speech recognition (ASR) models, have been introduced. This thesis presents an evaluation system that analyses the intelligibility of synthesized speech using automatic speech recognition and attempts to identify and categorize the intelligibility errors present in the speech. This system is evaluated in two experiments. The first uses publicly available sentences and corresponding synthesized speech, and the second uses publicly available models to synthesize speech for evaluation. Additionally, a survey is conducted in which human transcriptions are used instead of automatic speech recognition, and the resulting intelligibility evaluations are compared with those based on automatic speech recognition transcriptions. Results show that this system can be used to evaluate the intelligibility of a model, as well as identify and classify intelligibility errors. A combination of automatic speech recognition models can lead to more robust and reliable evaluations, and reference human recordings can be used to further increase confidence. The evaluation scores show a good correlation with human evaluations, and certain automatic speech recognition models show a stronger correlation with human evaluations. This research shows that automatic speech recognition can be used to produce a reliable and detailed analysis of text-to-speech intelligibility, which has the potential to make text-to-speech (TTS) improvements more efficient and allow the delivery of better text-to-speech models at a faster rate.
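A standard way to identify and categorize intelligibility errors from ASR transcripts (the thesis's exact taxonomy is not given in the abstract) is Levenshtein alignment of the hypothesis against the reference text, as used in word error rate (WER) scoring. A self-contained sketch:

```python
def align_errors(reference, hypothesis):
    """Count substitutions, deletions, insertions between word sequences
    using the standard Levenshtein alignment (as in WER scoring)."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = minimal edits to turn ref[:i] into hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j - 1] + cost,  # match/substitute
                           dp[i - 1][j] + 1,         # delete
                           dp[i][j - 1] + 1)         # insert
    # Walk back through the table to classify each error
    subs = dels = ins = 0
    i, j = len(ref), len(hyp)
    while i > 0 or j > 0:
        if i > 0 and j > 0 and dp[i][j] == dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1]):
            subs += ref[i - 1] != hyp[j - 1]
            i, j = i - 1, j - 1
        elif i > 0 and dp[i][j] == dp[i - 1][j] + 1:
            dels += 1
            i -= 1
        else:
            ins += 1
            j -= 1
    wer = (subs + dels + ins) / max(len(ref), 1)
    return {"sub": subs, "del": dels, "ins": ins, "wer": wer}

print(align_errors("the cat sat on the mat", "the cat sit on mat"))
```

Aggregating these per-utterance counts over a test set, possibly across several ASR models as the thesis suggests, yields both an overall intelligibility score and a breakdown of which error types a TTS model produces.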
219

An Accelerated General Purpose No-Reference Image Quality Assessment Metric and an Image Fusion Technique

Hettiarachchi, Don Lahiru Nirmal Manikka 09 September 2016 (has links)
No description available.
220

Comparison of Internal Synchronous Phantomless and Phantom-Based Volumetric Bone Mineral Density Calibration throughout the Human Body

Haverfield, Zachary A. January 2021 (has links)
No description available.
