  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
291

Automatic American Sign Language Imitation Evaluator

Feng, Qianli 16 September 2016 (has links)
No description available.
292

A Predictive (RIVPACS-Type) Model for Streams of the Western Allegheny Plateau

North, Sheila H. 02 October 2008 (has links)
No description available.
293

Early Detection of Dicamba and 2,4-D Herbicide Injuries on Soybean with LeafSpec, an Accurate Handheld Hyperspectral Leaf Scanner

Zhongzhong Niu (13133583) 22 July 2022 (has links)
Dicamba (3,6-dichloro-2-methoxybenzoic acid) and 2,4-D (2,4-dichlorophenoxyacetic acid) are two widely used herbicides for broadleaf weed control in soybeans. However, off-target application of dicamba and 2,4-D can cause severe damage to sensitive vegetation and crops. Early detection and assessment of off-target damage caused by these herbicides are necessary to help plant diagnostic labs and state regulatory agencies collect more information about on-site conditions and develop solutions to the problem. In 2021, a study was conducted to detect damage to soybean leaves caused by dicamba and 2,4-D using LeafSpec, an accurate handheld hyperspectral leaf scanner. High-resolution single-leaf hyperspectral images of 180 soybean plants in the greenhouse, exposed to nine different herbicide treatments, were taken 1, 7, 14, 21 and 28 days after herbicide spraying. Pairwise PLS-DA models based on spectral features were able to distinguish leaf damage caused by herbicides with two different modes of action, specifically dicamba and 2,4-D, as early as 2 hours after herbicide spraying. In the spatial distribution analysis, texture and morphological features were selected for separating the dosages of herbicide treatments. Compared to the mean-spectrum method, new models built upon spectral, texture and morphological features improved the overall accuracy to over 70% for all evaluation dates. The combined features were able to classify the correct dosage of the right herbicide as early as 7 days after herbicide spraying. Overall, this work demonstrates the potential of using spectral and spatial features of LeafSpec hyperspectral images for early and accurate detection of dicamba and 2,4-D damage in soybean plants.
294

PiEye in the Wild: Exploring Eye Contact Detection for Small Inexpensive Hardware

Einestam, Ragnar, Casserfelt, Karl January 2017 (has links)
Eye contact detection sensors have the possibility of inferring user attention, which can be utilized by a system in a multitude of different ways, including supporting human-computer interaction and measuring human attention patterns. In this thesis we attempt to build a versatile eye contact sensor using a Raspberry Pi that is suited for real-world practical usage. In order to ensure practicality, we constructed a set of criteria for the system based on previous implementations. To meet these criteria, we opted to use an appearance-based machine learning method where we train a classifier with training images in order to infer whether users look at the camera or not. Our aim was to investigate how well we could detect eye contact on the Raspberry Pi in terms of accuracy, speed and range. After extensive testing on combinations of four different feature extraction methods, we found that Linear Discriminant Analysis compression of pixel data provided the best overall accuracy, but Principal Component Analysis compression performed the best when tested on images from the same dataset as the training data. When investigating the speed of the system, we found that down-scaling input images had a huge effect on the speed, but also lowered the accuracy and range. While we managed to mitigate the effects the scale had on the accuracy, the range of the system is still relative to the scale of input images and, by extension, speed.
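The trade-off between PCA and LDA compression of pixel data can be illustrated with a small sketch. The data here is a synthetic stand-in for flattened eye patches, not images from the thesis, and the downstream classifier choices are illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
# Stand-in for flattened grayscale eye patches: 200 samples x 400 pixels.
X = rng.normal(0.0, 1.0, (200, 400))
y = rng.integers(0, 2, 200)
X[y == 1, :50] += 1.0          # synthetic "looking at camera" signal

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# PCA keeps the high-variance directions, label-agnostic.
pca_clf = make_pipeline(PCA(n_components=20), LogisticRegression(max_iter=1000))
# LDA projects onto the direction that best separates the two classes.
lda_clf = LinearDiscriminantAnalysis()

pca_acc = pca_clf.fit(X_tr, y_tr).score(X_te, y_te)
lda_acc = lda_clf.fit(X_tr, y_tr).score(X_te, y_te)
```

The design difference matches the thesis's finding: LDA compression uses the labels and so generalizes well overall, while PCA compression captures whatever dominates the pixel variance, which helps most when test images resemble the training distribution.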
295

DEVELOPMENT OF NON-DESTRUCTIVE INFRARED FIBER OPTIC METHOD FOR ASSESSMENT OF LIGAMENT AND TENDON COMPOSITION

Padalkar, Mugdha Vijay January 2016 (has links)
More than 350,000 anterior cruciate ligament (ACL) injuries occur every year in the United States. A torn ACL is typically replaced with an allograft or autograft tendon (patellar, quadriceps or hamstring), with the choice of tissue generally dictated by surgeon preference. Despite the number of ACL reconstructions performed every year, the process of ligamentization, transformation of a tendon graft to a healthy functional ligament, is poorly understood. Previous research studies have relied on mechanical, biochemical and histological studies. However, these methods are destructive. Clinically, magnetic resonance imaging (MRI) is the most common method of graft evaluation, but it lacks adequate resolution and molecular specificity. There is a need for objective methodology to study the ligament repair process that would ideally be non- or minimally invasive. Development of such a method could lead to a better understanding of the effects of therapeutic interventions and rehabilitation protocols in animal models of ligamentization, and ultimately, in clinical studies. Fourier transform infrared (FT-IR) spectroscopy is a technique sensitive to molecular structure and composition in tissues. FT-IR fiber optic probes combined with arthroscopy could prove to be an important tool where minimally invasive tissue assessment is required, such as assessment of graft composition during the ligamentization process. Spectroscopic methods have been used to differentiate normal and diseased connective tissues, but have not been applied to investigate ligamentization, or to investigate differences in tendons and ligaments. In the proposed studies, we hypothesize that infrared spectroscopy can provide molecular information about the compositional differences between tendons and ligaments, which can serve as a foundation to non-destructively monitor the tissue transformation that occurs during ligamentization. / Bioengineering
296

Some Advances in Classifying and Modeling Complex Data

Zhang, Angang 16 December 2015 (has links)
In statistical methodology for analyzing data, two of the most commonly used techniques are classification and regression modeling. As scientific technology progresses rapidly, complex data often occur and require novel classification and regression modeling methodologies suited to the data structure. In this dissertation, I focus on developing a few approaches for analyzing data with complex structures. Classification problems commonly occur in many areas such as biomedicine, marketing, sociology and image recognition. Among various classification methods, linear classifiers have been widely used because of their computational advantages, ease of implementation and interpretability compared with non-linear classifiers. Specifically, linear discriminant analysis (LDA) is one of the most important methods in the family of linear classifiers. As high dimensional data, with the number of variables p larger than the number of observations n, occur ever more frequently, advanced classification techniques are called for. In Chapter 2, I propose a novel sparse LDA method which generalizes LDA through a regularized approach for the two-class classification problem. The proposed method achieves accurate classification at attractive computational cost, which makes it suitable for high dimensional data with p > n. In Chapter 3, I deal with classification when the data complexity lies in non-random missing responses in the training data set, for which an appropriate classification method needs to be developed. Specifically, I consider the "reject inference" problem in the application of fraud detection for online business. In online business, to prevent fraudulent transactions, suspicious transactions are rejected with unknown fraud status, yielding training data with selectively missing responses. A two-stage modeling approach using logistic regression is proposed to enhance the efficiency and accuracy of fraud detection.
Besides the classification problem, data from designed experiments in scientific areas often have complex structures. Many experiments are conducted with multiple variance sources. To increase the accuracy of the statistical modeling, the model needs to accommodate more than one error term. In Chapter 4, I propose a variance component mixed model for data from a nano-material experiment to address the between-group, within-group and within-subject variance components in a single model. To adjust for possible systematic error introduced during the experiment, adjustment terms can be added. Specifically, a group adaptive forward and backward selection (GFoBa) procedure is designed to select the significant adjustment terms. / Ph. D.
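For the p > n setting the abstract mentions, one standard off-the-shelf remedy (distinct from the sparse LDA the chapter proposes, which this sketch does not reproduce) is shrinkage-regularised LDA, where the singular sample covariance is replaced by a Ledoit-Wolf shrinkage estimate:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(2)
n, p = 60, 500                 # more variables than observations
X = rng.normal(0.0, 1.0, (n, p))
y = np.repeat([0, 1], 30)
X[y == 1, :20] += 1.5          # synthetic class signal in the first 20 variables

# Ledoit-Wolf shrinkage makes the singular covariance estimate invertible,
# so the LDA direction can be computed despite p > n.
clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto").fit(X, y)
train_acc = clf.score(X, y)
```

Shrinkage regularises all p variables toward a common variance; a sparse LDA like the one proposed in Chapter 2 would instead drive most coefficients exactly to zero, which is why it can be both more accurate and more interpretable in this regime.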
297

Board composition, grey directors and corporate failure in the UK

Hsu, Hwa-Hsien, Wu, C.Y-H. 2013 December 1920 (has links)
This study examines the effect of board composition on the likelihood of corporate failure in the UK. We consider both independent and non-independent (grey) non-executive directors (NEDs) to enhance our understanding of the impact of NEDs' personal or economic ties with the firm and its management on firm performance. We find that firms with a larger proportion of grey directors on their boards are less likely to fail. Furthermore, the probability of corporate failure is lower both when firms have a higher proportion of grey directors relative to executive directors and when they have a higher proportion of grey directors relative to independent directors. Conversely, there is a positive relationship between the likelihood of corporate failure and the proportion of independent directors on corporate boards. The findings discussed in this study support the collaborative board model and the view that corporate governance reform efforts may have overemphasised the monitoring function of independent directors and underestimated the benefits of NEDs' affiliations with the firm and its management. (C) 2013 Elsevier Ltd. All rights reserved.
298

Statistical modelling by neural networks

Fletcher, Lizelle 30 June 2002 (has links)
In this thesis the two disciplines of Statistics and Artificial Neural Networks are combined into an integrated study of a data set from a weather modification experiment. An extensive literature study of artificial neural network methodology has revealed the strongly interdisciplinary nature of the research and the applications in this field. As artificial neural networks are becoming increasingly popular with data analysts, statisticians are becoming more involved in the field. A recursive algorithm is developed to optimize the number of hidden nodes in a feedforward artificial neural network, demonstrating how existing statistical techniques such as nonlinear regression and the likelihood-ratio test can be applied in innovative ways to develop and refine neural network methodology. This pruning algorithm is an original contribution to the field of artificial neural network methodology that simplifies the process of architecture selection, thereby reducing the number of training sessions needed to find a model that fits the data adequately. In addition, a statistical model to classify weather modification data is developed using both a feedforward multilayer perceptron artificial neural network and a discriminant analysis. The two models are compared, and the effectiveness of applying an artificial neural network model to a relatively small data set is assessed. The formulation of the problem, the approach followed to solve it and the novel modelling application all combine to make an original contribution to the interdisciplinary fields of Statistics and Artificial Neural Networks, as well as to the discipline of meteorology. / Mathematical Sciences / D. Phil. (Statistics)
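The idea of using a likelihood-ratio test to decide whether extra hidden nodes are justified can be sketched roughly as below. This is a generic illustration under Gaussian-error assumptions, not the recursive algorithm developed in the thesis; the degrees-of-freedom bookkeeping (3 parameters per added node for a 1-input, 1-output network) is simplified.

```python
import numpy as np
from scipy import stats
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
X = rng.uniform(-2.0, 2.0, (300, 1))
y = np.sin(2.0 * X[:, 0]) + rng.normal(0.0, 0.1, 300)

def neg2_loglik(model, X, y):
    """-2 x Gaussian log-likelihood of the residuals, up to a constant."""
    sigma2 = np.var(y - model.predict(X))
    return len(y) * np.log(sigma2)

def fit(h):
    return MLPRegressor(hidden_layer_sizes=(h,), solver="lbfgs",
                        max_iter=2000, random_state=0).fit(X, y)

# Grow the hidden layer while the likelihood-ratio test rejects the
# smaller model; each extra node adds 3 parameters here (one input
# weight, one bias, one output weight).
chosen, prev = 1, fit(1)
for h in range(2, 8):
    cand = fit(h)
    lr = neg2_loglik(prev, X, y) - neg2_loglik(cand, X, y)
    if stats.chi2.sf(max(lr, 0.0), df=3) < 0.05:
        chosen, prev = h, cand     # larger model fits significantly better
    else:
        break
```

The appeal of this style of procedure, as the abstract notes, is that it replaces blind grid search over architectures with a statistically motivated stopping rule, so far fewer training sessions are needed.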
299

Konkursprognostisering : En tillämpning av tre internationella modeller

Malm, Hanna, Rodriguez, Edith January 2015 (has links)
Background: Each year many companies go bankrupt, and this is associated with significant costs in the short term. Creditors, owners, investors, management, employees and society at large are those most affected by a bankruptcy. To be able to estimate a company's financial health, it is therefore important to be able to predict the risk of bankruptcy. To help with this, there are various bankruptcy prediction models that have been developed from the 1960s until today.
Purpose: To examine three international bankruptcy prediction models to see if they are applicable to Swedish companies, and to compare the accuracy obtained in our study with that of each model's original study. Method: The study was based on a quantitative research strategy with a deductive approach. The selection was based on companies that went bankrupt in 2014. Added to this is a control group consisting of an equal number of healthy companies. The random sample consisted of 30 bankrupt companies and 30 healthy companies from the manufacturing and industrial sectors. Theory: In this study three bankruptcy prediction models are examined: Altman, Fulmer and Springate. These models, together with previous research on bankruptcy prediction, are described further in the theory section. In addition, some financial ratios that are relevant to bankruptcy prediction are described. Result and conclusion: The models are not applicable to Swedish companies. The results of this study did not show sufficient accuracy, and the models must therefore be regarded as unreliable.
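Of the three models examined, Altman's (1968) Z-score for public manufacturing firms is the most widely cited, and its published coefficients and cut-offs can be written down directly (the Fulmer and Springate models follow the same weighted-ratio pattern with different coefficients and thresholds):

```python
def altman_z(wc, re, ebit, mve, sales, ta, tl):
    """Altman (1968) Z-score for public manufacturing firms.

    wc: working capital, re: retained earnings, ebit: earnings before
    interest and tax, mve: market value of equity, sales: net sales,
    ta: total assets, tl: total liabilities.
    """
    return (1.2 * wc / ta + 1.4 * re / ta + 3.3 * ebit / ta
            + 0.6 * mve / tl + 1.0 * sales / ta)

def z_zone(z):
    # Original cut-offs: distress below 1.81, safe above 2.99,
    # "grey zone" in between.
    if z < 1.81:
        return "distress"
    if z > 2.99:
        return "safe"
    return "grey"
```

For example, a firm with working capital 30, retained earnings 40, EBIT 20, market equity 120, sales 150, total assets 100 and total liabilities 60 (all in the same currency unit) scores Z = 4.28, well inside the safe zone. The thesis's finding is precisely that such imported cut-offs, estimated on US firms decades ago, do not transfer reliably to Swedish companies.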
300

The identification and application of common principal components

Pepler, Pieter Theo 12 1900 (has links)
Thesis (PhD)--Stellenbosch University, 2014. / When estimating the covariance matrices of two or more populations, the covariance matrices are often assumed to be either equal or completely unrelated. The common principal components (CPC) model provides an alternative which is situated between these two extreme assumptions: the assumption is made that the population covariance matrices share the same set of eigenvectors, but have different sets of eigenvalues. An important question in the application of the CPC model is to determine whether it is appropriate for the data under consideration. Flury (1988) proposed two methods, based on likelihood estimation, to address this question. However, the assumption of multivariate normality is untenable for many real data sets, making the application of these parametric methods questionable. A number of non-parametric methods, based on bootstrap replications of eigenvectors, are proposed to select an appropriate common eigenvector model for two population covariance matrices. Using simulation experiments, it is shown that the proposed selection methods outperform the existing parametric selection methods. If appropriate, the CPC model can provide covariance matrix estimators that are less biased than when assuming equality of the covariance matrices, and whose elements have smaller standard errors than the elements of the ordinary unbiased covariance matrix estimators. A regularised covariance matrix estimator under the CPC model is proposed, and Monte Carlo simulation results show that it provides more accurate estimates of the population covariance matrices than competing covariance matrix estimators. Covariance matrix estimation forms an integral part of many multivariate statistical methods. Applications of the CPC model in discriminant analysis, biplots and regression analysis are investigated.
It is shown that, in cases where the CPC model is appropriate, CPC discriminant analysis provides significantly smaller misclassification error rates than both ordinary quadratic discriminant analysis and linear discriminant analysis. A framework for the comparison of different types of biplots for data with distinct groups is developed, and CPC biplots constructed from common eigenvectors are compared to other types of principal component biplots using this framework. A subset of data from the Vermont Oxford Network (VON), of infants admitted to participating neonatal intensive care units in South Africa and Namibia during 2009, is analysed using the CPC model. It is shown that the proposed non-parametric methodology offers an improvement over the known parametric methods in the analysis of this data set, which originated from a non-normally distributed multivariate population. CPC regression is compared to principal component regression and partial least squares regression in the fitting of models to predict neonatal mortality and length of stay for infants in the VON data set. The fitted regression models, using readily available day-of-admission data, can be used by medical staff and hospital administrators to counsel parents and improve the allocation of medical care resources. Predicted values from these models can also be used in benchmarking exercises to assess the performance of neonatal intensive care units in the Southern African context, as part of larger quality improvement programmes.
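A quick numerical illustration of the CPC assumption (shared eigenvectors, different eigenvalues) and a naive alignment diagnostic; this is far simpler than Flury's likelihood-based methods or the bootstrap procedures the thesis proposes, and is meant only to make the model concrete:

```python
import numpy as np

rng = np.random.default_rng(4)
# Two populations satisfying the CPC model: Sigma_i = B diag(lambda_i) B^T
# with a common orthonormal eigenvector matrix B but different eigenvalues.
B, _ = np.linalg.qr(rng.normal(size=(4, 4)))
S1 = B @ np.diag([9.0, 4.0, 1.0, 0.5]) @ B.T
S2 = B @ np.diag([1.0, 6.0, 3.0, 0.2]) @ B.T

X1 = rng.multivariate_normal(np.zeros(4), S1, size=2000)
X2 = rng.multivariate_normal(np.zeros(4), S2, size=2000)

def eigvecs(X):
    _, v = np.linalg.eigh(np.cov(X, rowvar=False))
    return v  # columns are sample eigenvectors

V1, V2 = eigvecs(X1), eigvecs(X2)
# For each sample eigenvector of population 1, the best absolute cosine
# with any sample eigenvector of population 2; values near 1 for every
# eigenvector support a common eigenvector model.
align = np.abs(V1.T @ V2).max(axis=1)
```

Matching by the maximum over all eigenvectors is needed because the two populations order the shared eigenvectors differently (their eigenvalue rankings differ), which is exactly what distinguishes CPC from equality of covariance matrices.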
