121 |
Semi-parametric mixture models and applications to multiple testing
Nguyen, Van Hanh 01 October 2013 (has links)
In a multiple testing context, we consider a semiparametric mixture model with two components. One component is assumed to be known and corresponds to the distribution of p-values under the null hypothesis, with prior probability p. The other component f is nonparametric and stands for the distribution of p-values under the alternative hypothesis. The problem of estimating the parameters p and f of the model arises in false discovery rate (FDR) control procedures. In the first part of this dissertation, we study the estimation of the proportion p. We discuss asymptotic efficiency results and establish that two different cases occur depending on whether f vanishes on a non-empty interval or not. In the first case (vanishing on an interval), we exhibit estimators converging at the parametric rate, compute the optimal asymptotic variance and conjecture that no estimator is asymptotically efficient (i.e. attains the optimal asymptotic variance). In the second case, we prove that the quadratic risk of any estimator does not converge at the parametric rate. In the second part of the dissertation, we focus on the estimation of the unknown nonparametric component f in the mixture, relying on a preliminary estimator of p. We propose and study the asymptotic properties of two different estimators for this unknown component. The first estimator is a randomly weighted kernel estimator. We establish an upper bound for its pointwise quadratic risk, exhibiting the classical nonparametric rate of convergence over a class of Hölder densities. The second estimator is a maximum smoothed likelihood estimator. It is computed through an iterative algorithm, for which we establish a descent property. In addition, these estimators are used in a multiple testing procedure in order to estimate the local false discovery rate (lfdr).
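The two-component p-value model and the lfdr it feeds can be sketched numerically. The sketch below is illustrative only: it uses a Storey-type proportion estimate and a plain Gaussian-kernel plug-in for the mixture density, with a Beta(0.5, 5) alternative as a stand-in for f; these are assumptions for demonstration, not the dissertation's estimators.

```python
import math
import random

def simulate_pvalues(n, pi0, rng):
    # two-component mixture: Uniform(0,1) under the null (prob. pi0),
    # Beta(0.5, 5) under the alternative (mass concentrated near 0)
    return [rng.random() if rng.random() < pi0 else rng.betavariate(0.5, 5.0)
            for _ in range(n)]

def estimate_pi0(pvals, lam=0.5):
    # Storey-type estimate #{p_i > lam} / ((1 - lam) n); nearly unbiased
    # exactly when the alternative density vanishes on (lam, 1] -- the
    # "vanishing on an interval" case discussed in the abstract
    n = len(pvals)
    return min(1.0, sum(1 for p in pvals if p > lam) / ((1.0 - lam) * n))

def mixture_density(pvals, x, h=0.05):
    # Gaussian kernel estimate of the mixture density g at point x
    n = len(pvals)
    z = sum(math.exp(-0.5 * ((x - p) / h) ** 2) for p in pvals)
    return z / (n * h * math.sqrt(2.0 * math.pi))

def lfdr(pvals, x, pi0, h=0.05):
    # local false discovery rate: pi0 * f0(x) / g(x), with f0 = Uniform(0,1)
    return min(1.0, pi0 / mixture_density(pvals, x, h))
```

With enough simulated p-values, the proportion estimate lands near the true null fraction, and the estimated lfdr is small near zero (where alternatives cluster) and close to one far from zero.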
|
122 |
An Empirical Study of Students’ Performance at Assessing Normality of Data Through Graphical Methods
Leander Aggeborn, Noah, Norgren, Kristian January 2019 (has links)
When applying statistical methods that assume normality, there are different procedures for determining whether a sample is drawn from a normally distributed population. Because normality is such a central assumption, the reliability of these procedures is of utmost importance. Much research focuses on how good formal tests of normality are, while the performance of statisticians using graphical methods is far less examined. Therefore, the aim of this study was to empirically examine, by means of a web survey, how good statistics students are at assessing whether samples are drawn from normally distributed populations through graphical methods. The results of the study indicate that the students become distinctly better at accurately determining normality in data drawn from a normally distributed population as the sample size increases. Further, the students are very good at accurately rejecting normality when the sample is drawn from a symmetrical non-normal population, and fairly good when the sample is drawn from an asymmetrical distribution. In comparison to some common formal tests of normality, the students' performance is superior at accurately rejecting normality for small sample sizes and inferior for large ones, when the sample is drawn from a non-normal population.
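The small-sample weakness of formal normality tests that the study exploits can be illustrated with a moment-based (Jarque–Bera-type) statistic. This is a generic sketch, not a test used in the thesis: it draws symmetric non-normal (uniform) samples and shows the rejection rate climbing with sample size.

```python
import random

def jarque_bera(xs):
    # moment-based normality statistic: n/6 * (skew^2 + (kurtosis - 3)^2 / 4);
    # approximately chi^2 with 2 degrees of freedom under normality
    n = len(xs)
    m = sum(xs) / n
    m2 = sum((x - m) ** 2 for x in xs) / n
    m3 = sum((x - m) ** 3 for x in xs) / n
    m4 = sum((x - m) ** 4 for x in xs) / n
    skew = m3 / m2 ** 1.5
    kurt = m4 / m2 ** 2
    return n / 6.0 * (skew ** 2 + (kurt - 3.0) ** 2 / 4.0)

def rejection_rate(sampler, n, trials, rng, crit=5.99):
    # share of samples whose statistic exceeds the chi^2(2) 5% critical value
    return sum(jarque_bera([sampler(rng) for _ in range(n)]) > crit
               for _ in range(trials)) / trials
```

For uniform data (symmetric, non-normal) the test rarely rejects at n = 20 but almost always does at n = 200, matching the regime in which the study found human graders superior to formal tests.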
|
123 |
Evaluation of image acquisition methodologies for maize seed quality analysis
Caldas, Evanise Araujo 29 January 2014 (has links)
Brazil is one of the largest seed producers in the world. Therefore, ways must be sought to ensure seed quality, free from mechanical damage and/or infection. Computational techniques can assist in detecting these problems, indicating which seeds do or do not meet the standards of the existing legislation. The use of digital image processing has been established in various areas of knowledge. In agriculture, it can help with the visual inspection of seeds, a tedious task subject to human subjectivity. Digital image processing consists of several steps: image acquisition, preprocessing, segmentation, recognition and interpretation. Among these steps, acquisition is the most important, since the following stages depend on it to obtain the desired information. The acquisition stage involves an image capture device, lighting, and other elements. An acquisition system that is not designed for a defined purpose can produce images that are unusable for the image processing system. For instance, the system can detect a non-existent disease or mechanical damage in a seed, producing false positives and false negatives, due to the presence of residues in the environment or an inappropriate image resolution. The objective of this work is to choose the best methodology for capturing images for the analysis of maize seed quality. This was done through two measures of distance between histograms: intersection and correlation. To evaluate the performance of the developed methodologies in terms of repeatability and reproducibility, three repetitions of nine image groups were performed after systematically rearranging the equipment in each repetition, and three repetitions of nine image groups in which the equipment was not moved; each repetition yielded 81 images. As a result, the need to perform a calibration procedure of the acquisition system at each repetition was verified, and the images proved consistent within a given repetition, obtaining, in the best case, a distance between histograms of 0.99804 ± 0.00124, with the
correlation metric.
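The two histogram-comparison measures named in the abstract, intersection and correlation, can be sketched as follows. The binning helper and value ranges are illustrative assumptions, not the dissertation's actual acquisition pipeline.

```python
import math

def histogram(values, bins, lo, hi):
    # normalised histogram of scalar values over [lo, hi)
    h = [0.0] * bins
    for v in values:
        i = min(bins - 1, int((v - lo) / (hi - lo) * bins))
        h[i] += 1.0
    total = sum(h)
    return [c / total for c in h]

def intersection(h1, h2):
    # histogram intersection: sum of bin-wise minima; 1.0 means identical
    return sum(min(a, b) for a, b in zip(h1, h2))

def correlation(h1, h2):
    # Pearson correlation of bin heights, the metric behind the 0.998 figure
    n = len(h1)
    m1, m2 = sum(h1) / n, sum(h2) / n
    num = sum((a - m1) * (b - m2) for a, b in zip(h1, h2))
    den = math.sqrt(sum((a - m1) ** 2 for a in h1) *
                    sum((b - m2) ** 2 for b in h2))
    return num / den
```

Identical histograms score 1.0 on both measures; any discrepancy between two acquisitions lowers the scores, which is what the repeatability experiment quantifies.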
|
124 |
Image matching using rotating filters
Venkatrayappa, Darshan 04 December 2015 (has links)
Nowadays, computer vision algorithms can be found abundantly in applications related to video surveillance, 3D reconstruction, autonomous vehicles, medical imaging, etc. Image/object matching and detection forms an integral step in many of these algorithms. The most common methods for image/object matching and detection are based on local image descriptors, where interest points in the image are initially detected, followed by extracting the image features from the neighbourhood of the interest point and, finally, constructing the image descriptor.
In this thesis, contributions to the field of image feature matching using rotating half filters are presented. We follow three approaches: first, we present a new low bit-rate descriptor and a cascade matching strategy, which are integrated on a video platform. Secondly, we construct a new local image patch descriptor by embedding the response of rotating half filters in the Histogram of Oriented Gradients (HoG) framework. Finally, we propose a new approach for descriptor construction using second-order image statistics. All three approaches provide interesting and promising results, outperforming state-of-the-art descriptors.
Key words: rotating half filters, local image descriptor, image matching, Histogram of Oriented Gradients (HoG), Difference of Gaussians (DoG).
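The HoG framework into which the half-filter responses are embedded can be illustrated with a toy single-cell orientation histogram. This sketch uses plain central-difference gradients and 8 unsigned-orientation bins; the thesis replaces these gradients with rotating half-filter responses, which is not reproduced here.

```python
import math

def hog_cell(patch):
    """Orientation histogram (8 bins over [0, 180) degrees) of gradient
    directions in a grayscale patch given as a list of rows; a toy
    stand-in for one cell of a HoG descriptor."""
    h = [0.0] * 8
    rows, cols = len(patch), len(patch[0])
    for y in range(1, rows - 1):
        for x in range(1, cols - 1):
            gx = patch[y][x + 1] - patch[y][x - 1]   # central differences
            gy = patch[y + 1][x] - patch[y - 1][x]
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180.0  # unsigned
            h[min(7, int(ang / 22.5))] += mag        # magnitude-weighted vote
    s = sum(h) or 1.0
    return [v / s for v in h]                        # L1-normalised cell
```

A vertical edge produces purely horizontal gradients (orientation near 0 degrees), a horizontal edge purely vertical ones (near 90 degrees), so each lands in a single dominant bin.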
|
125 |
Amyotrophic lateral sclerosis (ALS) associated with superoxide dismutase 1 (SOD1) mutations in British Columbia, Canada: clinical, neurophysiological and neuropathological features
Stewart, Heather G. January 2005 (has links)
Amyotrophic lateral sclerosis (ALS) is a neurodegenerative disorder characterized by loss of motor neurons and their supporting cells in the brain, brainstem and spinal cord, resulting in muscle paresis and paralysis, including of the bulbar (speech, chewing, swallowing) and respiratory muscles. The average age at onset is 55 years, and death due to respiratory failure occurs 2-5 years after symptom onset in ~85% of cases. Five to 10% of ALS is familial, and about 20% of familial cases are associated with mutations in the superoxide dismutase 1 (SOD1) gene. To date, 118 SOD1 mutations have been reported worldwide (www.alsod.org). All are dominantly inherited, except for the D90A mutation, which is typically recessively inherited. D90A homozygous ALS is associated with long (~14 years) survival and some atypical symptoms and signs. The reason for this is not known. In contrast, most other SOD1 mutations are associated with average survival, while some are associated with aggressive disease having lower motor neuron predominance and survival of less than 12 months. The A4V mutation, which is the most frequently occurring SOD1 mutation in the United States, is an example of the latter. Understanding the pathogenic mechanisms of SOD1 mutants causing widely different disease forms like D90A and A4V is of paramount importance. Overwhelming scientific evidence indicates that mutations in the SOD1 gene are cytotoxic through a “gain of noxious function”, which, although not fully understood, results in protein aggregation and loss of cell function. This thesis explores different ALS-SOD1 gene mutations in British Columbia (BC), Canada. Two hundred and fifty-three ALS patients were screened for SOD1 mutations, and 12 (4.7%) unrelated patients were found to carry one of 5 different SOD1 mutations: A4V (n=2); G72C (n=1); D76Y (n=1); D90A (n=2); and I113T (n=6). Incomplete penetrance was observed in 3/12 families.
Bulbar onset disease was not observed in the SOD1 mutation carriers in this study, but gender distribution was similar to previously reported studies. Age at symptom onset for all patients enrolled, with or without SOD1 mutations, was older than reported in previous studies. On average, patients with SOD1 mutations experienced a longer diagnostic delay (22.6 months) than patients without mutations (12 months). Two SOD1 patients were originally misdiagnosed, including the G72C patient, whose presenting features resembled a proximal myopathy. Neuropathological examination of this patient failed to reveal upper motor neuron disease. The I113T mutation was associated with variable age of onset and survival time, and was found in 2 apparently sporadic cases. The D76Y mutation was also found in an apparently sporadic case. I113T and D76Y are likely influenced by other genetic or environmental factors in some individuals. Two patients were homozygous for the D90A mutation, with clinical features comparable to the patients originally described in Scandinavia. Clinical and electrophysiological motor neuron abnormalities were observed in heterozygous relatives of one D90A homozygous patient. The A4V patients were similar to those described in previous studies, although one had significant upper motor neuron disease both clinically and neuropathologically. Clinical neurophysiology is essential in the diagnosis of ALS, and helpful in monitoring disease progression. A number of transcranial magnetic stimulation (TMS) techniques may detect early dysfunction of upper motor neurons where imaging techniques lack sensitivity. Peristimulus time histograms (PSTHs), which assess corticospinal function by recording voluntarily activated single motor units during low-intensity TMS of the motor cortex, were used to study 19 ALS patients carrying 5 different SOD1 mutations (including 8 of the 12 patients identified with SOD1 mutations in BC).
Results were compared with idiopathic ALS cases, patients with multiple sclerosis (MS), and healthy controls. Significant differences in corticospinal pathophysiology were found between ALS patients with SOD1 mutations, idiopathic ALS patients, and MS patients. In addition, different SOD1 mutants were associated with significantly different neurophysiologic abnormalities. D90A homozygous patients show preserved, if not exaggerated, cortical inhibition and slow central conduction, which may reflect the more benign disease course associated with this mutant. In contrast, A4V patients show cortical hyper-excitability and only slightly delayed central conduction. I113T patients display a spectrum of abnormalities. This suggests mutant-specific SOD1 pathology of the corticospinal pathways in ALS.
|
126 |
Efficient Index Structures For Video Databases
Acar, Esra 01 February 2008 (has links) (PDF)
Content-based retrieval of multimedia data is still an active research area. The efficient retrieval of video data has proven a difficult task for content-based video retrieval systems. In this thesis study, a Content-Based Video Retrieval (CBVR) system that adapts two different index structures, namely Slim-Tree and BitMatrix, for efficiently retrieving videos based on low-level features such as color, texture, shape and motion is presented. The system represents low-level features of video data with MPEG-7 descriptors extracted from video shots using the MPEG-7 reference software and stored in a native XML database. The low-level descriptors used in the study are Color Layout (CL), Dominant Color (DC), Edge Histogram (EH), Region Shape (RS) and Motion Activity (MA). An Ordered Weighted Averaging (OWA) operator in Slim-Tree and BitMatrix aggregates these features to find the final similarity between any two objects. The system supports three different types of queries: exact match queries, k-NN queries and range queries. The experiments included in this study cover index construction, index update, query response time and retrieval efficiency, using the ANMRR performance metric and precision/recall scores. The experimental results show that using BitMatrix along with the Ordered Weighted Averaging method is superior in content-based video retrieval systems.
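The OWA aggregation step can be sketched in a few lines. The key property is that the weights attach to rank positions of the sorted similarity scores, not to particular descriptors, so one weight vector spans behaviors from max through plain averaging to min. This is a generic OWA sketch, not the weight vector tuned in the thesis.

```python
def owa(scores, weights):
    """Ordered Weighted Averaging: sort the per-descriptor similarity
    scores in descending order, then take the dot product with the
    fixed weight vector (weights sum to 1)."""
    assert len(scores) == len(weights)
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(w * s for w, s in zip(weights, sorted(scores, reverse=True)))
```

In the CBVR setting, `scores` would hold the five per-descriptor similarities (CL, DC, EH, RS, MA) between two shots, and `owa` collapses them into the single similarity used by the index.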
|
127 |
A Robust Traffic Sign Recognition System
Becer, Huseyin Caner 01 February 2011 (links) (PDF)
The traffic sign detection and recognition system is an essential part of driver warning and assistance systems. In this thesis, a traffic sign recognition system is studied. We consider circular, triangular and square Turkish traffic signs. For the detection stage, we have two different approaches. In the first approach, we assume that the detected signs are available. In the second approach, the region of interest (ROI) of the traffic sign image is given, and the traffic sign is extracted from the ROI using a detection algorithm.
In the recognition stage, the ring-partitioned method is implemented. In this method, the traffic sign is divided into rings and the normalized fuzzy histogram is used as an image descriptor. The histograms of these rings are compared with reference histograms. Ring partitions provide robustness to rotation, because rotation does not change the histogram of a ring. This is critical for circular signs, since rotation is hard to detect in them. To overcome the illumination problem, a specified gray-scale image is used. To apply this method to triangular and square signs, the circumscribed circle of these shapes is extracted.
The ring-partitioned method is tested both for the case where the detected signs are available and for the case where the region of interest of the traffic sign is given. The data sets contain about 500 static and video-captured images, all taken in daytime.
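The rotational invariance of ring partitioning can be demonstrated with a toy version: plain intensity histograms per concentric ring (the thesis uses normalized fuzzy histograms, which this sketch does not reproduce). Rotating the sign about its centre only permutes pixels within each ring, so every ring's histogram is unchanged.

```python
import math

def ring_histograms(img, n_rings=3, n_bins=8, max_val=256):
    """Split a square grayscale image (list of rows of ints) into
    concentric rings around its centre and build one intensity
    histogram per ring."""
    size = len(img)
    cx = cy = (size - 1) / 2.0
    rmax = size / 2.0
    hists = [[0] * n_bins for _ in range(n_rings)]
    for y in range(size):
        for x in range(size):
            r = math.hypot(x - cx, y - cy)
            if r >= rmax:
                continue  # ignore corners outside the circumscribed circle
            ring = min(n_rings - 1, int(r / rmax * n_rings))
            hists[ring][img[y][x] * n_bins // max_val] += 1
    return hists
```

A 90-degree rotation of the image leaves each pixel at the same radius, so the per-ring histograms match exactly, which is the property the recognizer exploits for circular signs.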
|
128 |
Theoretical and experimental study of protein-lipid interactions
Ivanova, Vesselka Petrova 01 November 2000 (links)
No description available.
|
129 |
Segmentation and Contrasting in Different Biomedical Imaging Applications
Tayyab, Muhammad 02 February 2012 (links) (PDF)
Advances in image acquisition equipment and progress in image processing methods have brought mathematicians and computer scientists into areas of huge importance for physicians and biologists. Early diagnosis of diseases like blindness, cancer and digestive problems has long been an area of interest in medicine. The development of laser photon microscopy and other advanced equipment already provides a good view of very interesting characteristics of the object being observed. Still, certain images are not suitable for extracting sufficient information. Image processing methods provide good support in obtaining useful information about the objects of interest in these biological images, and fast computational methods allow complete analysis of a series of images in a very short time, giving a reasonably good idea of the desired characteristics. This thesis covers the application of these methods to three series of images intended for three different types of diagnosis or inference. Firstly, images of RP-mutated retina were processed for the detection of rods where no cones were present; the software was able to detect and count the cones in each frame. Secondly, a gastrulation process in Drosophila was studied to observe any mitosis, and the results were consistent with recent research. Finally, a third series of images was processed in which biological cells were observed to undergo mitosis. The source was a video from a photon laser microscope, in which the objects of interest were biological cells, and the idea was to track the cells as they undergo mitosis. Cell position, spacing and sometimes the contour of the cell membrane are, broadly, the factors limiting accuracy in this video. Appropriate methods of image enhancement and segmentation were chosen to develop a computational method to observe this mitosis. Cases where human intervention may be required have been identified to eliminate any false inference.
|
130 |
Rank statistics of forecast ensembles
Siegert, Stefan 08 March 2013 (links) (PDF)
Ensembles are today routinely applied to estimate uncertainty in numerical predictions of complex systems such as the weather. Instead of initializing a single numerical forecast, using only the best guess of the present state as initial conditions, a collection (an ensemble) of forecasts whose members start from slightly different initial conditions is calculated. By varying the initial conditions within their error bars, the sensitivity of the resulting forecasts to these measurement errors can be accounted for. The ensemble approach can also be applied to estimate forecast errors that are due to insufficiently known model parameters by varying these parameters between ensemble members.
An important (and difficult) question in ensemble weather forecasting is how well an ensemble of forecasts reproduces the actual forecast uncertainty. A widely used criterion to assess the quality of forecast ensembles is statistical consistency, which demands that the ensemble members and the corresponding measurement (the “verification”) behave like random independent draws from the same underlying probability distribution. Since this forecast distribution is generally unknown, such an analysis is nontrivial. An established criterion to assess statistical consistency of a historical archive of scalar ensembles and verifications is uniformity of the verification rank: if the verification falls between the (k-1)-st and k-th largest ensemble member, it is said to have rank k. Statistical consistency implies that the average frequency of occurrence should be the same for each rank.
A central result of the present thesis is that, in a statistically consistent K-member ensemble, the (K+1)-dimensional vector of rank probabilities is a random vector that is uniformly distributed on the K-dimensional probability simplex. This behavior is universal for all possible forecast distributions. It thus provides a way to describe forecast ensembles in a nonparametric way, without making any assumptions about the statistical behavior of the ensemble data. The physical details of the forecast model are eliminated, and the notion of statistical consistency is captured in an elementary way. Two applications of this result to ensemble analysis are presented.
Ensemble stratification, the partitioning of an archive of ensemble forecasts into subsets using a discriminating criterion, is considered in the light of the above result. It is shown that certain stratification criteria can make the individual subsets of ensembles appear statistically inconsistent, even though the unstratified ensemble is statistically consistent. This effect is explained by considering statistical fluctuations of rank probabilities. A new hypothesis test is developed to assess statistical consistency of stratified ensembles while taking these potentially misleading stratification effects into account.
The distribution of rank probabilities is further used to study the predictability of outliers, which are defined as events where the verification falls outside the range of the ensemble, being either smaller than the smallest, or larger than the largest ensemble member. It is shown that these events are better predictable than by a naive benchmark prediction, which unconditionally issues the average outlier frequency of 2/(K+1) as a forecast. Predictability of outlier events, quantified in terms of probabilistic skill scores and receiver operating characteristics (ROC), is shown to be universal in a hypothetical forecast ensemble. An empirical study shows that in an operational temperature forecast ensemble, outliers are likewise predictable, and that the corresponding predictability measures agree with the analytically calculated ones.
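The verification rank, the flat rank histogram of a consistent ensemble, and the 2/(K+1) baseline outlier frequency can all be checked in a few lines of simulation. This is an illustrative sketch under the assumption of a statistically consistent ensemble (members and verification drawn iid, here from a standard normal); it is not the thesis's analytical derivation.

```python
import random

def verification_rank(ensemble, verification):
    # rank = 1 + number of ensemble members below the verification;
    # ranks 1 and K+1 are the "outlier" events discussed above
    return 1 + sum(1 for m in ensemble if m < verification)

def rank_frequencies(K, n_forecasts, rng):
    # statistically consistent ensemble: members and verification are
    # independent draws from the same distribution
    counts = [0] * (K + 1)
    for _ in range(n_forecasts):
        draws = [rng.gauss(0.0, 1.0) for _ in range(K + 1)]
        ensemble, verification = draws[:K], draws[K]
        counts[verification_rank(ensemble, verification) - 1] += 1
    return [c / n_forecasts for c in counts]
```

With K = 9 members, each of the 10 ranks occurs with frequency near 1/10, and the two extreme ranks together give the unconditional outlier frequency of 2/(K+1) = 0.2 used as the naive benchmark.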
|