91

Color Image Processing based on Graph Theory

Pérez Benito, Cristina 22 July 2019 (has links)
Computer vision is one of the fastest-growing fields at present and, along with other technologies such as Biometrics or Big Data, has become the focus of numerous research projects and is considered one of the technologies of the future. This broad field includes a plethora of digital image processing and analysis tasks. The success of image analysis and other high-level processing tasks, such as 3D imaging or pattern recognition, depends to a large extent on the quality of the raw images acquired. Nowadays, many factors degrade images and hinder the achievement of optimal image quality, which makes digital image (pre-)processing a fundamental step prior to any other processing task. The most common of these factors are noise and poor acquisition conditions: noise artefacts hamper proper interpretation of the image, and acquisition under poor lighting or exposure conditions, such as dynamic scenes, causes loss of image information that can be key for certain processing tasks. The image (pre-)processing steps known as smoothing and sharpening are commonly applied to overcome these problems: smoothing aims to reduce noise, while sharpening focuses on improving or recovering imprecise or damaged information in image details and edges whose insufficient sharpness or blurred content prevents optimal image (post-)processing. There are many methods for smoothing the noise in an image; however, in many cases the filtering process blurs the edges and details of the image. Likewise, there are many sharpening techniques that try to combat this loss of information, but they do not account for the noise present in the image they process: applied to a noisy image, any sharpening technique will also amplify the noise. Although the intuitive solution to this last case would be to filter first and sharpen afterwards, this two-step approach has proved not to be optimal: the filtering may remove information that, in turn, cannot be recovered in the later sharpening step. In this PhD dissertation we propose a model based on graph theory for color image processing from a vector approach. In this model, a graph is built for each pixel in such a way that its features allow us to characterize and classify that pixel. As we will show, the proposed model is robust and versatile, potentially able to adapt to a wide variety of applications. In particular, we apply the model to create new solutions for the two fundamental problems of image processing: smoothing and sharpening. We study the model in depth as a function of the threshold, the key parameter that ensures correct classification of the image pixels, as well as the model's properties, so as to exploit it fully in each application. For high-performance image smoothing, we use the proposed model to determine whether a pixel belongs to a flat region or not, taking into account the need for a high-precision classification even in the presence of noise. On this basis, we build an adaptive soft-switching filter that employs the pixel classification to combine the outputs of a filter with high smoothing capability and a softer one for edge/detail regions; the resulting filter removes Gaussian noise without blurring edges or losing detail information. Furthermore, another application of the model uses the pixel characterization to perform simultaneous smoothing and sharpening of color images, combining two operations that are opposite by definition and thus overcoming the drawbacks of the two-stage approach. We compare all the proposed image processing techniques with other state-of-the-art methods and show that they are competitive from both an objective (numerical) and a visual evaluation point of view. / Pérez Benito, C. (2019). Color Image Processing based on Graph Theory [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/123955 / TESIS
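As an illustration of the soft-switching idea described in this abstract, the sketch below blends a strong and a mild Gaussian filter according to a per-pixel flat/edge classification. The thesis derives that classification from its per-pixel graph model; the local-gradient test, the threshold, and the filter strengths used here are assumptions, not the author's method.

```python
# Illustrative sketch only: a generic soft-switching smoother in the spirit of the
# abstract. The thesis builds a graph per pixel; here a simple local-gradient test
# stands in for that classifier, so thresholds and filters are assumptions.
import numpy as np
from scipy import ndimage

def soft_switching_smooth(img, grad_thresh=0.08, strong_sigma=2.0, mild_sigma=0.7):
    """Blend a strong and a mild Gaussian filter according to a per-pixel
    flat-region / edge-detail classification (img: float RGB in [0, 1])."""
    # Per-pixel "edge-ness": gradient magnitude of the luminance channel.
    lum = img @ np.array([0.299, 0.587, 0.114])
    gy, gx = np.gradient(lum)
    grad = np.hypot(gx, gy)

    # Soft membership in [0, 1]: 0 = flat region, 1 = edge/detail region.
    w_edge = np.clip(grad / grad_thresh, 0.0, 1.0)[..., None]

    # Two candidate outputs: heavy smoothing for flat areas, light for details.
    strong = np.stack([ndimage.gaussian_filter(img[..., c], strong_sigma) for c in range(3)], axis=-1)
    mild = np.stack([ndimage.gaussian_filter(img[..., c], mild_sigma) for c in range(3)], axis=-1)

    return (1.0 - w_edge) * strong + w_edge * mild

# Example: denoise a synthetic noisy image with a vertical edge.
rng = np.random.default_rng(0)
clean = np.zeros((64, 64, 3)); clean[:, 32:] = 1.0
noisy = np.clip(clean + rng.normal(0, 0.05, clean.shape), 0, 1)
out = soft_switching_smooth(noisy)
```

The per-pixel weight interpolates smoothly between the two filter outputs, which is the essence of soft switching: flat regions receive the strong smoother, edge and detail regions the mild one.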
92

Prognostisering av försäkringsärenden : Hur brytpunktsdetektion och effekter av historiska lag– och villkorsförändringar kan användas i utvecklingen av prognosarbete / Forecasting of insurance claims : How breakpoint detection and effects of historical legal and policy changes can be used in the development of forecasting

Tengborg, Sebastian, Widén, Joakim January 2013 (has links)
This report presents an approach for finding and dating breakpoints in time series. A breakpoint is defined as the date at which a large level change occurs in the series. A strategy for estimating the effect of dated breakpoints is also presented. By analysing time series of AFA Försäkring's claim inflow, it turns out that breakpoints in the series coincide with exogenous events that may have caused them, for example policy or legal changes in the insurance industry. The report shows that, with a methodical approach, the effect of an exogenous event can be estimated. These estimated effects can be used in future forecasts when a similar change is expected to occur. In addition, forecasts of the claim inflow two years ahead are produced with different time series models.
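As a rough illustration of what dating a breakpoint and estimating its effect can look like, here is a minimal sketch: a single level shift is dated by the split that minimises the within-segment sum of squares, and its effect is estimated as the difference in segment means. This is not the thesis's method, and the data are invented.

```python
# Minimal sketch (not the thesis's exact method): date a single level shift by the
# split that minimises the within-segment sum of squares, then estimate its effect
# as the difference in segment means. The series below is synthetic.
import numpy as np

def date_level_shift(y):
    """Return (breakpoint index, estimated level shift) for a 1-D series y."""
    y = np.asarray(y, dtype=float)
    best_t, best_sse = None, np.inf
    for t in range(2, len(y) - 2):                  # candidate break dates
        sse = ((y[:t] - y[:t].mean()) ** 2).sum() + ((y[t:] - y[t:].mean()) ** 2).sum()
        if sse < best_sse:
            best_t, best_sse = t, sse
    effect = y[best_t:].mean() - y[:best_t].mean()  # size of the level change
    return best_t, effect

# Example: monthly claim inflow with a jump after an assumed rule change at t = 60.
rng = np.random.default_rng(1)
series = np.r_[rng.normal(100, 5, 60), rng.normal(130, 5, 40)]
t_hat, effect_hat = date_level_shift(series)
print(t_hat, round(effect_hat, 1))                  # roughly 60 and 30
```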
93

Opportunism vid nedskrivningsprövning av goodwill? : En kritisk studie av tidigare angivna förklaringar till avvikelser mellan en genom CAPM beräknad diskonteringsränta och den av företaget redovisade, vid nedskrivningsprövning av goodwill / Opportunism in goodwill impairment testing? : A critical study of previously proposed explanations for deviations between a discount rate calculated through CAPM and the rate reported by the company in goodwill impairment testing

Carlborg, Christian, Renman Claesson, Ludvig January 2012 (has links)
In 2005, IFRS 3 and IAS 36 were implemented in Sweden. Since then, companies perform impairment testing of goodwill, which may include discounted cash flow analyses. In 2009 the researchers Carlin and Finch conducted a study of Australian listed companies to investigate whether the discount rates used in these impairment tests were set opportunistically. They did this by demonstrating deviations between the discount rates that companies reported and discount rates calculated by the researchers using the Capital Asset Pricing Model [CAPM]. Carlin and Finch argue that reported discount rates deviating more than 150 basis points from the estimated rates cannot be explained by estimation error and are thus consistent with opportunistic behavior. They presented several forms of opportunism as explanations for these deviating discount rates, including earnings management in the form of big bath and income smoothing. This study examines whether deviating discount rates occur and whether the explanations presented by Carlin and Finch can be documented for companies listed on Nasdaq OMX Stockholm in 2010. This is done by using the same method as Carlin and Finch for calculating the theoretical discount rates and then relating the deviations to earnings development and to goodwill impairments actually recognised. The study shows that deviations between reported discount rates and theoretical discount rates estimated through CAPM appear to be common, and that deviations which could be explained by big bath occur, although this appears to be unusual. No deviation between reported and theoretical discount rates can be shown that is explained by opportunistic behavior through income smoothing aimed at dampening earnings. Furthermore, the study criticises earlier studies' conclusions that behavior consistent with opportunism explains deviations between reported discount rates and discount rates calculated using CAPM.
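For reference, the "theoretical" discount rate in studies of this kind is the CAPM cost of equity, and the deviation is measured in basis points. The exact inputs (risk-free proxy, market risk premium, any pre-tax adjustment) used by Carlin and Finch or by this study are not specified in the abstract, so the formulation below is only the standard textbook form.

```latex
% CAPM cost of equity used as the theoretical discount rate, and the deviation
% in basis points between it and the reported rate; inputs such as the risk-free
% proxy and market risk premium are choices not specified in the abstract.
\[
  r_{\mathrm{CAPM}} = r_f + \beta \left( E[r_m] - r_f \right),
  \qquad
  \Delta_{\mathrm{bp}} = 10\,000 \left( r_{\mathrm{reported}} - r_{\mathrm{CAPM}} \right)
\]
```

Carlin and Finch treat deviations with |Δ_bp| above 150 basis points as too large to be explained by estimation error alone.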
94

Forecasting daily maximum temperature of Umeå

Naz, Saima January 2015 (has links)
The aim of this study is to find an approach that can help improve predictions of the daily temperature of Umeå. Weather forecasts are available through various sources nowadays, and there are various software packages and methods for time series forecasting. Our aim is to investigate the daily maximum temperatures of Umeå and compare the performance of some methods in forecasting these temperatures. We analyse the data of daily maximum temperatures and produce predictions for a local period using autoregressive integrated moving average (ARIMA) models, exponential smoothing (ETS), and cubic splines. The forecast package in R is used for this purpose, and the automatic forecasting methods available in the package are applied for modelling with ARIMA, ETS, and cubic splines. The thesis begins with some initial modelling on the univariate time series of daily maximum temperatures. The data of daily maximum temperatures of Umeå from 2008 to 2013 are used to compare the methods using various lengths of training period, and on the basis of accuracy measures we try to choose the best method. Keeping in mind that various factors can cause variability in daily temperature, we then try to improve the forecasts in the next part of the thesis by using a multivariate time series forecasting method on the series of maximum temperatures together with some other variables. A vector autoregressive (VAR) model from the vars package in R is used to analyse the multivariate time series. Results: ARIMA, with a training period of one year, is selected as the best method in comparison with ETS and cubic smoothing splines for forecasting the one-step-ahead daily maximum temperature of Umeå. ARIMA also provides better forecasts of daily temperatures for the next two or three days. On the basis of this study, VAR (for multivariate time series) does not help to improve the forecasts significantly. The proposed ARIMA model with a one-year training period is compatible with the forecasts of daily maximum temperature of Umeå obtained from the Swedish Meteorological and Hydrological Institute (SMHI).
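As an illustration of the rolling one-step-ahead evaluation described above, here is a minimal sketch in Python with statsmodels; the thesis itself uses the automatic model selection of R's forecast package, so the fixed ARIMA order and the synthetic temperature data below are assumptions.

```python
# Hedged sketch: the thesis uses R's forecast::auto.arima; here an analogous rolling
# one-step-ahead ARIMA forecast with statsmodels and a fixed, assumed order (1,0,1).
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def rolling_one_step_forecasts(y, train_len=365, order=(1, 0, 1)):
    """Refit on a sliding one-year window and forecast the next day's maximum."""
    preds = []
    for t in range(train_len, len(y)):
        fit = ARIMA(y[t - train_len:t], order=order).fit()
        preds.append(fit.forecast(steps=1)[0])
    return np.array(preds)

# Example on synthetic seasonal "daily maximum temperature" data.
rng = np.random.default_rng(2)
days = np.arange(3 * 365)
temps = 5 + 12 * np.sin(2 * np.pi * days / 365) + rng.normal(0, 2, days.size)
preds = rolling_one_step_forecasts(temps[:400])          # keep the demo short
mae = np.mean(np.abs(preds - temps[365:400]))
```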
95

Extending covariance structure analysis for multivariate and functional data

Sheppard, Therese January 2010 (has links)
For multivariate data, when testing homogeneity of covariance matrices arising from two or more groups, Bartlett's (1937) modified likelihood ratio test statistic is appropriate to use under the null hypothesis of equal covariance matrices, where the null distribution of the test statistic is based on the restrictive assumption of normality. Zhang and Boos (1992) provide a pooled bootstrap approach when the data cannot be assumed to be normally distributed. We give three alternative bootstrap techniques for testing homogeneity of covariance matrices when it is inappropriate to pool the data into one single population, as in the pooled bootstrap procedure, and when the data are not normally distributed. We further show that our alternative bootstrap methodology can be extended to testing Flury's (1988) hierarchy of covariance structure models. Where deviations from normality exist, we show, by simulation, that the normal theory log-likelihood ratio test statistic is less viable compared with our bootstrap methodology. For functional data, Ramsay and Silverman (2005) and Lee et al. (2002) together provide four computational techniques for functional principal component analysis (PCA) followed by covariance structure estimation. When the smoothing method for smoothing individual profiles is based on using least squares cubic B-splines or regression splines, we find that the ensuing covariance matrix estimate suffers from loss of dimensionality. We show that ridge regression can be used to resolve this problem, but only for the discretisation and numerical quadrature approaches to estimation, and that the choice of a suitable ridge parameter is not arbitrary. We further show the unsuitability of regression splines when deciding on the optimal degree of smoothing to apply to individual profiles. To gain insight into smoothing parameter choice for functional data, we compare kernel and spline approaches to smoothing individual profiles in a nonparametric regression context. Our simulation results justify a kernel approach using a new criterion based on predicted squared error. We also show by simulation that, when taking account of correlation, a kernel approach using a generalized cross-validatory type criterion performs well. These data-based methods for selecting the smoothing parameter are illustrated prior to a functional PCA on a real data set.
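For readers unfamiliar with the pooled bootstrap of Zhang and Boos (1992) referenced above, the following sketch shows the general idea applied to a Bartlett-type statistic. It does not reproduce the thesis's alternative bootstrap schemes; the statistic, the centring step, and the resample counts are standard or assumed choices.

```python
# Hedged sketch of the pooled bootstrap referenced in the abstract (Zhang & Boos, 1992)
# applied to a Bartlett-type statistic; the thesis's alternative non-pooled bootstrap
# schemes are not reproduced here.
import numpy as np

def bartlett_m(groups):
    """Bartlett-type statistic M = (N-k) ln|S_p| - sum_i (n_i-1) ln|S_i|."""
    k = len(groups)
    ns = np.array([g.shape[0] for g in groups])
    covs = [np.cov(g, rowvar=False) for g in groups]
    sp = sum((n - 1) * s for n, s in zip(ns, covs)) / (ns.sum() - k)   # pooled covariance
    logdet = lambda a: np.linalg.slogdet(a)[1]
    return (ns.sum() - k) * logdet(sp) - sum((n - 1) * logdet(s) for n, s in zip(ns, covs))

def pooled_bootstrap_pvalue(groups, n_boot=999, seed=0):
    rng = np.random.default_rng(seed)
    m_obs = bartlett_m(groups)
    pooled = np.vstack([g - g.mean(axis=0) for g in groups])   # centre, then pool
    ns = [g.shape[0] for g in groups]
    count = 0
    for _ in range(n_boot):
        resampled = [pooled[rng.integers(0, len(pooled), n)] for n in ns]
        count += bartlett_m(resampled) >= m_obs
    return (count + 1) / (n_boot + 1)

# Example: two trivariate groups with equal covariance (null hypothesis true).
rng = np.random.default_rng(3)
g1, g2 = rng.normal(size=(40, 3)), rng.normal(size=(60, 3))
print(pooled_bootstrap_pvalue([g1, g2]))
```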
96

Hybridation GPS/Vision monoculaire pour la navigation autonome d'un robot en milieu extérieur / Outdoor robotic navigation by GPS and monocular vision sensors fusion

Codol, Jean-Marie 15 February 2012 (has links)
We are witnessing nowadays the importation of ICT (Information and Communications Technology) into robotics. In the coming years, the union of these technologies will give birth to general-public service robotics. This future, if realised, will be the result of upstream research in several domains: mechatronics, telecommunications, automatic control, signal and image processing, artificial intelligence ... One particularly interesting problem in mobile robotics is simultaneous localisation and mapping: in many cases, a mobile robot must localise itself in its environment before it can behave intelligently. The question is then: what precision can we aim for in terms of localisation, and at what cost? 
In this context, one objective shared by robotics research laboratories, whose results are keenly awaited by industry, is positioning and mapping of the environment that is at once precise, usable everywhere, with integrity, low-cost and real-time. The preferred sensors are inexpensive ones, such as a standard GPS (of metric precision) and a set of embeddable payload sensors (e.g. video cameras); these types of sensors constitute the main support of our work. In this thesis, we address the localisation problem of a mobile robot, which we choose to handle with a probabilistic approach. The procedure is as follows: we define our "variables of interest", a set of random variables; we describe their distribution laws and their evolution models; and we determine a cost function so as to build an observer (a class of algorithms whose objective is to find the minimum of the cost function). Our contribution consists of using raw GPS measurements (raw data are the measurements issued from the code and phase correlation loops, respectively called code and phase pseudo-range measurements) for precise low-cost navigation in a suburban outdoor environment. By exploiting the so-called "integer" property of GPS phase ambiguities, we extend this navigation to build a precise, low-cost GPS-RTK (Real Time Kinematic) system in local differential mode. Our propositions have been validated through experiments carried out on our robotic demonstrator.
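For context, the raw code and carrier-phase measurements mentioned above are usually written with the standard single-frequency GNSS observation equations below (textbook form, not the thesis's exact notation).

```latex
% Standard single-frequency GNSS observation model: code pseudorange P and carrier
% phase Phi (in metres) between receiver r and satellite s.
\begin{aligned}
  P_r^s    &= \rho_r^s + c\,(\delta t_r - \delta t^s) + I_r^s + T_r^s + \varepsilon_P, \\
  \Phi_r^s &= \rho_r^s + c\,(\delta t_r - \delta t^s) - I_r^s + T_r^s + \lambda N_r^s + \varepsilon_\Phi,
\end{aligned}
```

Here ρ is the geometric range, δt the clock offsets, I and T the ionospheric and tropospheric delays, λ the carrier wavelength and N the carrier-phase ambiguity; the fact that N is an integer is the "integer" property that differential RTK processing exploits to reach centimetre-level positioning.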
97

Prediction and variable selection in sparse ultrahigh dimensional additive models

Ramirez, Girly Manguba January 1900 (has links)
Doctor of Philosophy / Department of Statistics / Haiyan Wang / Advances in technology have enabled many fields to collect datasets where the number of covariates (p) tends to be much bigger than the number of observations (n), the so-called ultrahigh dimensionality. In this setting, classical regression methodologies are invalid. There is a great need to develop methods that can explain the variation of the response variable using only a parsimonious set of covariates. In recent years, there have been significant developments in variable selection procedures. However, these available procedures usually result in the selection of too many false variables. In addition, most of the available procedures are appropriate only when the response variable is linearly associated with the covariates. Motivated by these concerns, we propose another procedure for variable selection in the ultrahigh dimensional setting which has the ability to reduce the number of false positive variables. Moreover, this procedure can be applied when the response variable is continuous or binary, and when the response variable is linearly or non-linearly related to the covariates. Inspired by the Least Angle Regression approach, we develop two multi-step algorithms to select variables in sparse ultrahigh dimensional additive models. The variables go through a series of nonlinear dependence evaluations following a Most Significant Regression (MSR) algorithm. In addition, the MSR algorithm is also designed to implement prediction of the response variable. The first algorithm, called MSR-continuous (MSRc), is appropriate for a dataset with a continuous response variable. Simulation results demonstrate that this algorithm works well. Comparisons with other methods such as greedy-INIS by Fan et al. (2011) and the generalized correlation procedure by Hall and Miller (2009) showed that MSRc not only has a false positive rate that is significantly lower than both methods, but also has accuracy and true positive rate comparable with greedy-INIS. The second algorithm, called MSR-binary (MSRb), is appropriate when the response variable is binary. Simulations demonstrate that MSRb is competitive in terms of prediction accuracy and true positive rate, and better than GLMNET in terms of false positive rate. Application of MSRb to real datasets is also presented. In general, the MSR algorithm usually selects fewer variables while preserving the accuracy of predictions.
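The abstract does not spell out the MSR algorithm, so the sketch below shows only a generic marginal nonparametric screening step of the kind such multi-step procedures build on; the cubic polynomial basis and the R²-ranking are illustrative assumptions, not the authors' method.

```python
# Illustrative marginal nonparametric screening only; the authors' MSR algorithm is
# not specified in the abstract, so this generic cubic-basis ranking is an assumption.
import numpy as np

def marginal_screen(X, y, top=10, degree=3):
    """Rank covariates by the R^2 of a marginal polynomial fit y ~ poly(x_j)."""
    n, p = X.shape
    yc = y - y.mean()
    tss = (yc ** 2).sum()
    scores = np.empty(p)
    for j in range(p):
        B = np.vander(X[:, j], degree + 1)                # cubic basis plus intercept
        coef, *_ = np.linalg.lstsq(B, y, rcond=None)
        scores[j] = 1.0 - ((y - B @ coef) ** 2).sum() / tss
    return np.argsort(scores)[::-1][:top]                 # indices of top covariates

# Example: n = 100 observations, p = 2000 covariates, only x_0 and x_1 matter.
rng = np.random.default_rng(4)
X = rng.normal(size=(100, 2000))
y = np.sin(X[:, 0]) + X[:, 1] ** 2 + rng.normal(0, 0.3, 100)
print(marginal_screen(X, y, top=5))
```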
98

Measurement of biomass concentration using a microwave oven and analysis of data for estimation of specific rates

Buono, Mark Anthony. January 1985 (has links)
Call number: LD2668 .T4 1985 B86 / Master of Science
99

Analysis of vocal fold kinematics using high speed video

Unnikrishnan, Harikrishnan 01 January 2016 (has links)
Vocal folds are the twin in-foldings of the mucous membrane stretched horizontally across the larynx. They vibrate, modulating the constant air flow initiated from the lungs, and the pulsating pressure wave blowing through the glottis is thus the source for voiced speech production. Study of vocal fold dynamics during voicing is critical for the treatment of voice pathologies. Since the vocal folds move at 100-350 cycles per second, their visual inspection is currently done by stroboscopy, which merges information from multiple cycles to present an apparent motion. High Speed Digital Laryngeal Imaging (HSDLI), with a temporal resolution of up to 10,000 frames per second, has been established as better suited for assessing the vocal fold vibratory function through direct recording. But the widespread use of HSDLI is limited due to lack of consensus on modalities such as the features to be examined, and image processing techniques that circumvent the tedious and time-consuming effort of examining large volumes of recordings still have room for improvement. Fundamental questions such as the required frame rate or resolution for the recordings are still not adequately answered, and HSDLI cannot provide absolute physical measurements of the anatomical features and vocal fold displacement. This work addresses these challenges through improved signal processing. A vocal fold edge extraction technique with subpixel accuracy, suited even for the hard-to-record pediatric population, is developed first. The algorithm, which is equally applicable to pediatric and adult subjects, is implemented to facilitate user inspection and intervention. Objective features describing the fold dynamics, extracted from the edge displacement waveform, are proposed and analyzed on a diverse dataset of healthy males, females and children. The sampling and quantization noise present in the recordings are analyzed and methods to mitigate them are investigated. A customized Kalman smoothing and spline interpolation on the displacement waveform is found to improve the feature estimation stability. The relationship between frame rate, spatial resolution and vibration for efficient capture of information is derived. Finally, to address the inability to obtain absolute physical measurements, a structured light projection calibrated with respect to the endoscope is prototyped.
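As a rough illustration of the smoothing step mentioned above, the sketch below applies a generic constant-velocity Kalman filter with a Rauch-Tung-Striebel backward pass to a noisy displacement waveform and then upsamples it with a cubic spline; the state model and noise parameters are assumptions, not the paper's customized smoother.

```python
# Hedged sketch: a generic constant-velocity Kalman filter + RTS smoother, followed
# by cubic-spline upsampling, stands in for the paper's customized smoother.
import numpy as np
from scipy.interpolate import CubicSpline

def rts_smooth(z, dt, q=50.0, r=0.05):
    """Smooth a 1-D displacement measurement z sampled every dt seconds."""
    F = np.array([[1.0, dt], [0.0, 1.0]])
    H = np.array([[1.0, 0.0]])
    Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])   # process noise
    R = np.array([[r]])                                            # measurement noise
    n = len(z)
    xf = np.zeros((n, 2)); Pf = np.zeros((n, 2, 2))
    xp = np.zeros((n, 2)); Pp = np.zeros((n, 2, 2))
    x, P = np.array([z[0], 0.0]), np.eye(2)
    for k in range(n):                                             # forward filter
        xp[k], Pp[k] = F @ x, F @ P @ F.T + Q
        S = H @ Pp[k] @ H.T + R
        K = Pp[k] @ H.T @ np.linalg.inv(S)
        x = xp[k] + (K @ (z[k] - H @ xp[k])).ravel()
        P = (np.eye(2) - K @ H) @ Pp[k]
        xf[k], Pf[k] = x, P
    xs = xf.copy()
    for k in range(n - 2, -1, -1):                                 # RTS backward pass
        C = Pf[k] @ F.T @ np.linalg.inv(Pp[k + 1])
        xs[k] = xf[k] + C @ (xs[k + 1] - xp[k + 1])
    return xs[:, 0]                                                # smoothed position

# Example: a 4 kHz displacement waveform, smoothed then upsampled to 16 kHz.
fs, f0 = 4000, 200
t = np.arange(0, 0.05, 1 / fs)
noisy = np.sin(2 * np.pi * f0 * t) + np.random.default_rng(5).normal(0, 0.1, t.size)
smooth = rts_smooth(noisy, 1 / fs)
upsampled = CubicSpline(t, smooth)(np.arange(0, t[-1], 1 / 16000))
```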
100

Factors influencing U.S. canine heartworm (Dirofilaria immitis) prevalence

Wang, Dongmei, Bowman, Dwight, Brown, Heidi, Harrington, Laura, Kaufman, Phillip, McKay, Tanja, Nelson, Charles, Sharp, Julia, Lund, Robert January 2014 (has links)
BACKGROUND: This paper examines the individual factors that influence prevalence rates of canine heartworm in the contiguous United States. A data set provided by the Companion Animal Parasite Council, which contains county-by-county results of over nine million heartworm tests conducted during 2011 and 2012, is analyzed for predictive structure. The goal is to identify the factors that are important in predicting high canine heartworm prevalence rates.
METHODS: The factors considered in this study are those envisioned to impact whether a dog is likely to have heartworm. The factors include climate conditions (annual temperature, precipitation, and relative humidity), socio-economic conditions (population density, household income), local topography (surface water and forestation coverage, elevation), and vector presence (several mosquito species). A baseline heartworm prevalence map is constructed using estimated proportions of positive tests in each county of the United States. A smoothing algorithm is employed to remove localized small-scale variation and highlight large-scale structures of the prevalence rates. Logistic regression is used to identify significant factors for predicting heartworm prevalence.
RESULTS: All of the examined factors have power in predicting heartworm prevalence, including median household income, annual temperature, county elevation, and presence of the mosquitoes Aedes trivittatus, Aedes sierrensis and Culex quinquefasciatus. Interactions among factors also exist.
CONCLUSIONS: The factors identified are significant in predicting heartworm prevalence. The factor list is likely incomplete due to data deficiencies. For example, coyotes and feral dogs are known reservoirs of heartworm infection; unfortunately, no complete data of their populations were available. The regression model considered is currently being explored to forecast future values of heartworm prevalence.
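A minimal sketch of the kind of regression described in METHODS is given below, assuming statsmodels and synthetic county-level data; the predictor names and coefficients are invented, and the paper's interaction terms and spatial smoothing step are omitted.

```python
# Hedged sketch: a binomial GLM of county-level test outcomes on a few of the factors
# named above. The columns and synthetic data are assumptions, not the CAPC dataset
# or the paper's exact model (which also includes interactions and spatial smoothing).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(6)
n_counties = 500
df = pd.DataFrame({
    "temperature": rng.normal(15, 6, n_counties),       # annual mean, deg C
    "income": rng.normal(50, 12, n_counties),            # median household, $1000s
    "elevation": rng.normal(300, 200, n_counties),       # metres
    "tests": rng.integers(200, 5000, n_counties),        # heartworm tests per county
})
logit_p = -4.0 + 0.12 * df.temperature - 0.02 * df.income - 0.001 * df.elevation
df["positives"] = rng.binomial(df.tests, 1 / (1 + np.exp(-logit_p)))

X = sm.add_constant(df[["temperature", "income", "elevation"]])
endog = np.column_stack([df.positives, df.tests - df.positives])   # (successes, failures)
fit = sm.GLM(endog, X, family=sm.families.Binomial()).fit()
print(fit.summary())
```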
