121

Cost-Aware Machine Learning and Deep Learning for Extremely Imbalanced Data

Ahmed, Jishan 11 August 2023 (has links)
No description available.
122

A Comparison of Microarray Analyses: A Mixed Models Approach Versus the Significance Analysis of Microarrays

Stephens, Nathan Wallace 20 November 2006 (has links) (PDF)
DNA microarrays are a relatively new technology for assessing the expression levels of thousands of genes simultaneously. Researchers hope to find genes that are differentially expressed by hybridizing cDNA from known treatment sources with various genes spotted on the microarrays. The large number of tests involved in analyzing microarrays has raised new questions in multiple testing. Several approaches for identifying differentially expressed genes have been proposed. This paper considers two: (1) a mixed models approach, and (2) the Significance Analysis of Microarrays.
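
A minimal sketch of the gene-wise testing problem the abstract describes, on simulated expression data; ordinary per-gene t-tests stand in for the mixed-model fit, and the SAM-style statistic adds a small "fudge factor" s0 to the denominator (all parameters are illustrative, not from the thesis):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_genes, n_per_group = 1000, 5
# Simulated log-expression: most genes null, first 50 shifted in group 2.
x1 = rng.normal(0.0, 1.0, size=(n_genes, n_per_group))
x2 = rng.normal(0.0, 1.0, size=(n_genes, n_per_group))
x2[:50] += 1.5

# Ordinary per-gene t-statistics (a stand-in for a gene-wise model fit).
t_stat, p_val = stats.ttest_ind(x2, x1, axis=1)

# SAM-style statistic: the added s0 stabilises genes with tiny variances,
# which otherwise dominate the top of the ranking by chance.
diff = x2.mean(axis=1) - x1.mean(axis=1)
pooled_se = np.sqrt(x1.var(axis=1, ddof=1) / n_per_group
                    + x2.var(axis=1, ddof=1) / n_per_group)
s0 = np.percentile(pooled_se, 5)        # one common heuristic choice
d_stat = diff / (pooled_se + s0)

print("genes flagged by |t| > 3:", int(np.sum(np.abs(t_stat) > 3)))
print("genes flagged by |d| > 3:", int(np.sum(np.abs(d_stat) > 3)))
```
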
123

Vehicle Collision Risk Prediction Using a Dynamic Bayesian Network / Förutsägelse av kollisionsrisk för fordon med ett dynamiskt Bayesianskt nätverk

Lindberg, Jonas, Wolfert Källman, Isak January 2020 (has links)
This thesis tackles the problem of predicting the collision risk for vehicles driving in complex traffic scenes a few seconds into the future. The method is based on previous research using dynamic Bayesian networks to represent the state of the system. Common risk prediction methods are often categorized into three groups according to their abstraction level. The most complex of these are interaction-aware models, which take driver interactions into account; such models often suffer from high computational complexity, a key limitation in practical use. The model studied in this work accounts for interactions between drivers by considering driver intentions and the traffic rules in the scene. The state of the traffic scene used in the model contains the physical state of vehicles, the intentions of drivers, and the expected behaviour of drivers according to the traffic rules. To allow for real-time risk assessment, approximate inference of the state given the noisy sensor measurements is performed using sequential importance resampling. Two measures of risk are studied. The first is based on driver intentions not matching the expected maneuver, which in turn could lead to a dangerous situation. The second is based on a trajectory prediction step and uses the two measures time to collision (TTC) and time to critical collision probability (TTCCP). The implemented model can be applied in complex traffic scenarios with numerous participants; this work focuses on intersection and roundabout scenarios. The model is tested on simulated and real data from these scenarios. In these qualitative tests, the model was able to correctly identify collisions a few seconds before they occurred, and it also avoided false positives by detecting the vehicles that will give way.
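
A minimal sketch of the sequential importance resampling step the abstract refers to, on a toy one-dimensional state (a constant-velocity vehicle with noisy position measurements; the noise levels and particle count are illustrative, not taken from the thesis):

```python
import numpy as np

rng = np.random.default_rng(1)
n_particles, n_steps, dt = 500, 20, 0.1
q_std, r_std = 0.5, 1.0                 # process / measurement noise (assumed)

# State per particle: [position, velocity]; ground truth for the simulation.
truth = np.array([0.0, 10.0])
particles = rng.normal(truth, [1.0, 2.0], size=(n_particles, 2))
weights = np.full(n_particles, 1.0 / n_particles)

for _ in range(n_steps):
    truth[0] += truth[1] * dt
    z = truth[0] + rng.normal(0.0, r_std)            # noisy position sensor
    # Propagate particles through the motion model.
    particles[:, 0] += particles[:, 1] * dt + rng.normal(0, q_std, n_particles)
    # Re-weight by the Gaussian measurement likelihood.
    weights *= np.exp(-0.5 * ((z - particles[:, 0]) / r_std) ** 2)
    weights /= weights.sum()
    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < n_particles / 2:
        idx = rng.choice(n_particles, size=n_particles, p=weights)
        particles = particles[idx]
        weights = np.full(n_particles, 1.0 / n_particles)

print("true position:", truth[0])
print("estimated position:", np.average(particles[:, 0], weights=weights))
```
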
124

Design and assessment of a computer-assisted artificial intelligence system for predicting preterm labor in women attending regular check-ups. Emphasis in imbalance data learning technique

Nieto del Amor, Félix 18 December 2023 (has links)
Thesis by compendium / [EN] Preterm delivery, defined as birth before 37 weeks of gestation, is a significant global concern with implications for the health of newborns and economic costs. It affects approximately 11% of all births, amounting to more than 15 million individuals worldwide. Current methods for predicting preterm labor lack precision, leading to overdiagnosis and limited practicality in clinical settings. Electrohysterography (EHG) has emerged as a promising alternative by providing relevant information about uterine electrophysiology. However, previous prediction systems based on EHG have not effectively translated into clinical practice, primarily due to biases in handling imbalanced data and the need for robust and generalizable prediction models. This doctoral thesis aims to develop an artificial-intelligence-based preterm labor prediction system using EHG and obstetric data from women undergoing regular prenatal check-ups. This system entails extracting relevant features, optimizing the feature subspace, and evaluating strategies to address the imbalanced-data challenge for robust prediction. The study validates the effectiveness of temporal, spectral, and non-linear features in distinguishing between preterm and term labor cases. Novel entropy measures, namely dispersion and bubble entropy, outperform traditional entropy metrics in identifying preterm labor. Additionally, the study seeks to maximize complementary information while minimizing redundancy and noise features to optimize the feature subspace for accurate preterm delivery prediction by a genetic algorithm. Furthermore, we have confirmed information leakage between the training and test data sets when synthetic samples are generated before data partitioning, giving rise to an overestimated generalization capability of the predictor system. These results emphasize the importance of partitioning before resampling to ensure data independence between training and test samples. We propose to combine the genetic algorithm and the resampling method in the same iteration to deal with imbalanced-data learning using a partition-resampling pipeline, achieving an area under the ROC curve of 94% and an average precision of 84%. Moreover, the model demonstrates an F1-score and recall of approximately 80%, outperforming existing studies that follow the partition-resampling approach. This finding reveals the potential of an EHG-based preterm birth prediction system, enabling patient-oriented strategies for enhanced preterm labor prevention, maternal-fetal well-being, and optimal hospital resource management. Overall, this doctoral thesis provides clinicians with valuable tools for decision-making in preterm labor maternal-fetal risk scenarios. It enables clinicians to design patient-oriented strategies for enhanced preterm birth prevention and management. The proposed methodology holds promise for the development of an integrated preterm birth prediction system that can enhance pregnancy planning, optimize resource allocation, and ultimately improve the outcomes for both mother and baby. / Nieto Del Amor, F. (2023).
Design and assessment of a computer-assisted artificial intelligence system for predicting preterm labor in women attending regular check-ups. Emphasis in imbalance data learning technique [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/200900 / Compendio
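
A minimal sketch of the leakage issue the abstract highlights, on a generic imbalanced binary dataset; random oversampling stands in for the thesis's synthetic-sample generation and a random forest for its classifier, both purely illustrative choices:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)

# Correct order: partition first, then resample only the training split.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
minority = np.flatnonzero(y_tr == 1)
extra = rng.choice(minority, size=(y_tr == 0).sum() - minority.size)
X_bal = np.vstack([X_tr, X_tr[extra]])
y_bal = np.concatenate([y_tr, y_tr[extra]])

clf = RandomForestClassifier(random_state=0).fit(X_bal, y_bal)
print("AUC on untouched test set:",
      roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
# Resampling *before* the split would copy minority rows into both splits,
# so the test set would overlap the training set and the score would be
# optimistically inflated, which is the overestimation the thesis reports.
```
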
125

Expeditious Causal Inference for Big Observational Data

Yumin Zhang (13163253) 28 July 2022 (has links)
This dissertation addresses two significant challenges in the causal inference workflow for Big Observational Data. The first is designing Big Observational Data with high-dimensional and heterogeneous covariates. The second is performing uncertainty quantification for estimates of causal estimands that are obtained from the application of black-box machine learning algorithms on the designed Big Observational Data. The methodologies developed by addressing these challenges are applied to the design and analysis of Big Observational Data from a large public university in the United States.

Distributed Design. A fundamental issue in causal inference for Big Observational Data is confounding due to covariate imbalances between treatment groups. This can be addressed by designing the study prior to analysis. The design ensures that subjects in the different treatment groups that have comparable covariates are subclassified or matched together. Analyzing such a designed study helps to reduce biases arising from the confounding of covariates with treatment. Existing design methods, developed for traditional observational studies with a single designer, can yield unsatisfactory designs with sub-optimal covariate balance for Big Observational Data because they cannot accommodate the massive dimensionality, heterogeneity, and volume of the Big Data. We propose a new framework for the distributed design of Big Observational Data amongst collaborative designers. Our framework first assigns subsets of the high-dimensional and heterogeneous covariates to multiple designers. The designers then summarize their covariates into lower-dimensional quantities, share their summaries with the others, and design the study in parallel based on their assigned covariates and the summaries they receive. The final design is selected by comparing balance measures for all covariates across the candidates and identifying the best amongst them. We perform simulation studies and analyze datasets from the 2016 Atlantic Causal Inference Conference Data Challenge to demonstrate the flexibility and power of our framework for constructing designs with good covariate balance from Big Observational Data.

Designed Bootstrap. The combination of modern machine learning algorithms with the nonparametric bootstrap can enable effective predictions and inferences on Big Observational Data. An increasingly prominent and critical objective in such analyses is to draw causal inferences from the Big Observational Data. A fundamental step in addressing this objective is to design the observational study prior to the application of machine learning algorithms. However, the application of the traditional nonparametric bootstrap on Big Observational Data requires excessive computational effort, because every bootstrap sample would need to be re-designed under the traditional approach, which can be prohibitive in practice. We propose a design-based bootstrap for deriving causal inferences with reduced bias from the application of machine learning algorithms on Big Observational Data. Our bootstrap procedure operates by resampling from the original designed observational study. It eliminates the need for the additional, costly design steps on each bootstrap sample that are performed under the standard nonparametric bootstrap. We demonstrate the computational efficiency of this procedure compared to the traditional nonparametric bootstrap, and its equivalency in terms of confidence interval coverage rates for the average treatment effects, by means of simulation studies and a real-life case study.

Case Study. We apply the distributed design and designed bootstrap methodologies in a case study involving institutional data from a large public university in the United States. The institutional data contains comprehensive information about the undergraduate students in the university, ranging from their academic records to on-campus activities. We study the causal effects of undergraduate students' attempted course load on their academic performance based on a selection of covariates from these data. Ultimately, our real-life case study demonstrates how our methodologies enable researchers to effectively use straightforward design procedures to obtain valid causal inferences with reduced computational effort from the application of machine learning algorithms on Big Observational Data.
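
A minimal sketch of the designed-bootstrap idea, assuming a study that has already been matched into treated-control pairs; the matching itself, the simulated outcomes, and the effect size are illustrative stand-ins, not the dissertation's data:

```python
import numpy as np

rng = np.random.default_rng(3)
n_pairs = 500
# A designed study: outcomes for matched treated/control pairs.
y_treated = rng.normal(1.0, 2.0, n_pairs)   # simulated true ATE of 1.0
y_control = rng.normal(0.0, 2.0, n_pairs)

def ate(idx):
    # Average treatment effect estimated on a sample of pair indices.
    return np.mean(y_treated[idx] - y_control[idx])

# Designed bootstrap: resample whole matched pairs from the *existing*
# design, so no bootstrap replicate needs to be re-matched from scratch.
boot = np.array([ate(rng.integers(0, n_pairs, n_pairs)) for _ in range(2000)])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"ATE = {ate(np.arange(n_pairs)):.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")
```

Resampling pairs rather than raw subjects is what removes the per-replicate design cost: the expensive matching step is done once, and each bootstrap draw reuses it.
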
126

Newsvendor Models With Monte Carlo Sampling

Ekwegh, Ijeoma W 01 August 2016 (has links)
The newsvendor model is used to solve inventory problems in which demand is random. This thesis focuses on a method that uses Monte Carlo sampling to estimate the order quantity that either maximizes revenue or minimizes cost when demand is uncertain. Given data, the Monte Carlo approach is used to sample demand scenarios and to estimate the probability density function. A bootstrapping process yields an empirical distribution for the order quantity that maximizes the expected profit. Finally, the method is applied to a newsvendor example to show that it works in maximizing profit.
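
A minimal sketch of the Monte Carlo procedure described here, with an illustrative price, cost, and demand distribution (the thesis's actual data and parameters are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(4)
price, cost = 10.0, 6.0                     # illustrative sell price / unit cost
demand = rng.gamma(shape=4.0, scale=25.0, size=2_000)   # assumed demand model

def expected_profit(q, d):
    sales = np.minimum(q, d)                # cannot sell more than demand
    return np.mean(price * sales - cost * q)

qs = np.arange(0, 301, 5)
q_star = qs[np.argmax([expected_profit(q, demand) for q in qs])]

# Bootstrap the demand sample for an empirical distribution of the optimum.
boot_q = [qs[np.argmax([expected_profit(q, rng.choice(demand, demand.size))
                        for q in qs])] for _ in range(200)]
print("q* =", q_star, "| bootstrap 90% interval:", np.percentile(boot_q, [5, 95]))
```

As a sanity check, for this profit function the theoretical optimum is the critical-fractile quantile of demand at ratio (price - cost) / price, which the simulated curve should approximate.
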
127

利用預測分析-篩選及檢視再保險契約中之承保風險 / Selecting and Monitoring Insurance Risk on Reinsurance Treaties Using Predictive Analysis

吳家安, Wu, Chiao-An Unknown Date (has links)
Insurers traditionally transfer their insurance risk through the international reinsurance market. Owing to the uncertainty of the insured events, the primary insurer needs to carefully evaluate the insured risk and transfer it to the ceding reinsurers. Two major types of reinsurance, the pro rata treaty and the excess of loss treaty, are used in protecting against claim losses and strengthening the insurer's solvency. In this article, the predictive distribution of the claim size is constructed to monitor future claim underwriting losses under the reinsurance agreement. Simple Importance Resampling (SIR) is employed in sampling the posterior distribution of the risk parameters, and Monte Carlo simulations are then used to approximate the predictive distribution. Plausible prior distributions of the risk parameters are chosen for simulating their posterior distributions. A Markov chain Monte Carlo (MCMC) method using the Gibbs sampling scheme is also performed, based on possible parametric structures, to determine the optimal retention in the reinsurance decision process. Both the pro rata and excess of loss treaties are investigated to quantify the retention risks of the ceding reinsurers. Our model focuses on the insurance risks underwritten by the insurer; through the implemented model and simulation techniques, the primary insurer can project its underwriting risks and manage them accordingly. The results show a significant advantage and flexibility of this approach in risk management. This article outlines the procedure for building the model, and a practical case study is performed as a numerical illustration.
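
A minimal sketch of the SIR step for one risk parameter, assuming Poisson claim counts with a Gamma prior on the rate; the severity model, treaty terms, and retention level below are illustrative numbers, not the thesis's:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
claims = np.array([3, 1, 4, 2, 5, 3, 2])    # observed yearly claim counts

# 1. Draw candidate risk parameters from the prior.
lam = rng.gamma(shape=2.0, scale=2.0, size=20_000)  # prior on Poisson rate

# 2. Weight by the likelihood of the observed data, then resample (SIR).
log_w = np.sum(stats.poisson.logpmf(claims[:, None], lam), axis=0)
w = np.exp(log_w - log_w.max())
post = rng.choice(lam, size=5_000, p=w / w.sum())   # approximate posterior

# 3. Predictive simulation of next year's aggregate loss, with an
#    excess-of-loss retention applied per claim (illustrative figures).
retention, severity_mean = 100.0, 80.0
n_claims = rng.poisson(post)
agg_retained = np.array([
    np.minimum(rng.exponential(severity_mean, n), retention).sum()
    for n in n_claims])
print("mean retained aggregate loss:", agg_retained.mean())
```
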
128

Signal Processing Algorithms For Digital Image Forensics

Prasad, S 02 1900 (has links)
Availability of digital cameras in various forms and user-friendly image editing software has enabled people to create and manipulate digital images easily. While image editing can be used to enhance the quality of images, it can also be used to tamper with images for malicious purposes. In this context, it is important to question the originality of digital images. Digital image forensics deals with the development of algorithms and systems to detect tampering in digital images. This thesis presents some simple algorithms which can be used to detect tampering in digital images. Out of the various kinds of image forgeries possible, the discussion is restricted to photo compositing (photomontaging) and copy-paste forgeries. While creating a photomontage, it is very likely that one of the images needs to be resampled, and hence there will be an inconsistency in some of its underlying characteristics. Detection of resampling in an image therefore gives a clue for deciding whether the image is tampered or not. Two pixel-domain techniques to detect resampling are presented. The first exploits the property of periodic zeros that occur in the second differences due to interpolation during resampling; it requires a special condition on the resampling factor to be met. The second technique is based on the periodic zero-crossings that occur in the second differences, which does not require any special condition on the resampling factor. It has been noted that this is an important property of resampling, and hence the robustness of this technique against mild counter-attacks such as JPEG compression and additive noise has been studied. This property is used repeatedly throughout the thesis. It is a well-known fact that interpolation is essentially low-pass filtering. In the case of a photomontage image consisting of resampled and non-resampled portions, there will be an inconsistency in the high-frequency content of the image, which can be demonstrated by simple high-pass filtering. This fact has also been exploited to detect photomontaging. One approach involves performing blockwise DCT and reconstructing the image using only a few high-frequency coefficients. Another elegant approach is to decompose the image using wavelets and reconstruct it using only the diagonal detail coefficients. In both cases mere visual inspection will reveal the forgery. The second part of the thesis is related to tamper detection in colour filter array (CFA) interpolated images. Digital cameras employ Bayer filters to efficiently capture the RGB components of an image. The outputs of the Bayer filter are sub-sampled versions of the R, G, and B components, which are completed using demosaicing algorithms. It has been shown that demosaicing of the color components is equivalent to resampling the image by a factor of two; hence, CFA-interpolated images contain periodic zero-crossings in their second differences. The presence of these periodic zero-crossings has been demonstrated experimentally in images captured using four digital cameras of different brands. When such an image is tampered, the periodic zero-crossings are destroyed, and hence the tampering can be detected. The utility of zero-crossings in detecting various kinds of forgeries on CFA-interpolated images has been discussed. The next part of the thesis presents a technique to detect copy-paste forgery in images. Generally, when an object or a portion of an image has to be erased, the easiest way to do it is to copy a portion of the background from the same image and paste it over the object. In such a case, there are two pixel-wise identical regions in the same image, which, when detected, can serve as a clue of tampering. The use of the Scale-Invariant Feature Transform (SIFT) in detecting this kind of forgery has been studied, and certain modifications that can be made to the image in order to make SIFT work effectively have also been proposed. Throughout the thesis, the importance of human intervention in making the final decision about the authenticity of an image is highlighted, and it is concluded that the techniques presented can effectively aid the decision-making process.
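
A minimal sketch of the periodic-zeros cue on a one-dimensional signal; factor-2 linear interpolation stands in for generic resampling, and real detectors work on image rows and columns and must cope with noise, which this toy ignores:

```python
import numpy as np

rng = np.random.default_rng(6)
x = rng.normal(size=200)                    # original samples

# Upsample by a factor of 2 with linear interpolation (a resampling step).
t_hi = np.arange(0, x.size - 1, 0.5)
x_hi = np.interp(t_hi, np.arange(x.size), x)

# Second differences: each interpolated sample is the average of its two
# neighbours, so the second difference vanishes there, periodically.
d2 = np.diff(x_hi, n=2)
zeros = np.isclose(d2, 0.0, atol=1e-12)
print("zero positions (first 12):", np.flatnonzero(zeros)[:12])
# A strictly periodic pattern of zeros is the telltale sign of resampling;
# its absence over part of an image hints at a spliced, non-resampled region.
```
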
129

Quantile Estimation based on the Almost Sure Central Limit Theorem / Schätzung von Quantilen basierend auf dem zentralen Grenzwertsatz in der fast sicheren Version

Thangavelu, Karthinathan 25 January 2006 (has links)
No description available.
130

Zlepšení rozlišení pro vícečetné snímky stejné scény / Superresolution

Mezera, Lukáš January 2010 (has links)
The task of this master's thesis is to design a custom method for increasing the resolution of an image of a scene when several frames of that scene are available. In the theoretical part of the thesis, methods based on signal processing principles are identified as the best approaches to increasing image resolution. The basic requirements on multi-frame resolution-enhancement methods and their typical structure are then described, followed by a brief overview of these methods and a mutual comparison against optimality criteria. The practical part of the thesis deals with the actual design of a method for increasing image resolution when several frames of the scene are available. The first proposed method is implemented and tested; testing, however, reveals that it performs poorly for low-resolution frames of the scene that are mutually rotated. For this reason, an improved resolution-enhancement method is proposed. This method employs robust techniques in its computation, which makes it independent of rotation between the low-resolution frames. This method is also thoroughly tested, and its results are compared with those of the first proposed method. The first method is better in terms of computation time, but its results for images containing rotations are of poor quality, whereas for images that differ only by translation its results are very good. The improved method is therefore useful mainly for images containing rotations. The thesis concludes by proposing one further improvement that could enhance the results of the second proposed method.
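
A minimal sketch of the shift-and-add flavour of multi-frame super-resolution, assuming the registration is known and purely translational; the thesis's own robust, rotation-tolerant method is more involved:

```python
import numpy as np

rng = np.random.default_rng(7)
factor = 2
hr = rng.random((64, 64))                   # stand-in high-resolution scene

# Simulate low-res frames: each frame samples the scene on a sub-pixel
# shifted grid (registration assumed known), plus a little sensor noise.
offsets = [(0, 0), (0, 1), (1, 0), (1, 1)]  # shifts in high-res pixels
frames = [hr[dy::factor, dx::factor] + rng.normal(0, 0.01, (32, 32))
          for dy, dx in offsets]

# Shift-and-add: place every frame's samples back onto the high-res grid
# at its registered offset; here the four offsets exactly tile the grid.
sr = np.zeros_like(hr)
for (dy, dx), f in zip(offsets, frames):
    sr[dy::factor, dx::factor] = f

print("RMSE of reconstruction:", np.sqrt(np.mean((sr - hr) ** 2)))
# With rotation between frames this direct placement breaks down, which is
# why the thesis moves to a robust method tolerant of mutual rotation.
```
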
