251 |
Parallel implementation of curve reconstruction from noisy samples. Randrianarivony, Maharavo; Brunnett, Guido. 06 April 2006
This paper is concerned with approximating noisy
samples by non-uniform rational B-spline curves
with special emphasis on free knots. We show how to
set up the problem such that nonlinear optimization
methods can be applied efficiently. This involves
the introduction of penalty terms in order to
avoid undesired knot positions. We report on our
implementation of the nonlinear optimization and we
show a way to implement the program in parallel.
Parallel performance results are reported: our
experiments show that our program achieves linear
speedup with an efficiency close to unity. Runtime
results on a parallel computer are displayed.
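To make the free-knot setup concrete, the following is a minimal sketch of the idea described above: the interior knots are treated as optimization variables, a penalty term discourages knots from colliding or leaving the parameter domain, and a least-squares spline fit is computed for each candidate knot vector. It uses plain B-splines rather than full NURBS, scipy in place of the authors' implementation, and illustrative parameter values (knot count, penalty weight); none of these choices are taken from the paper.

```python
import numpy as np
from scipy.interpolate import make_lsq_spline
from scipy.optimize import minimize

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.05, size=x.size)  # noisy samples

k = 3            # cubic B-splines
n_interior = 6   # number of free interior knots (assumed value)
mu = 1e-3        # penalty weight (assumed value)

def objective(interior):
    interior = np.sort(interior)
    gaps = np.diff(np.concatenate(([0.0], interior, [1.0])))
    if np.any(gaps <= 1e-4):        # knots collided or left [0, 1]
        return 1e9
    t = np.concatenate((np.zeros(k + 1), interior, np.ones(k + 1)))
    try:
        spl = make_lsq_spline(x, y, t, k=k)
    except ValueError:              # Schoenberg-Whitney conditions violated
        return 1e9
    residual = np.sum((spl(x) - y) ** 2)
    penalty = mu * np.sum(1.0 / gaps)   # penalizes clustered knot positions
    return residual + penalty

knots0 = np.linspace(0.0, 1.0, n_interior + 2)[1:-1]  # uniform initial guess
res = minimize(objective, knots0, method="Nelder-Mead")
print("optimized interior knots:", np.sort(res.x))
```

The penalty plays the role of the paper's penalty terms: without it, the optimizer is free to merge knots, producing degenerate knot vectors that the least-squares fit cannot handle.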
|
252 |
Parallel implementation of surface reconstruction from noisy samples. Randrianarivony, Maharavo; Brunnett, Guido. 06 April 2006
We consider the problem of reconstructing a surface from noisy samples by approximating the point set with non-uniform rational B-spline surfaces. We emphasize that the knot sequences should be treated as unknown variables, alongside the control points and the weights, so that their optimal positions can be found. We show how to set up the free-knot problem such that constrained nonlinear optimization can be applied efficiently. We describe in detail a parallel implementation of our approach that gives almost linear speedup. Finally, we provide numerical results obtained on the Chemnitzer Linux Cluster supercomputer.
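As a sketch of the data-parallel pattern behind such speedup measurements, the snippet below splits the sample set across worker processes, lets each compute a partial sum of squared residuals, and reduces the results, reporting speedup S_p = T_1/T_p and efficiency E_p = S_p/p. The toy "surface" and all names are illustrative assumptions; the paper's actual distance computations against a NURBS surface are far more involved.

```python
import numpy as np
from multiprocessing import Pool
from time import perf_counter

def partial_residual(chunk):
    pts = np.asarray(chunk)
    # toy stand-in for point-to-surface distance evaluation (assumed form)
    return float(np.sum((pts[:, 2] - np.sin(pts[:, 0]) * np.cos(pts[:, 1])) ** 2))

if __name__ == "__main__":
    pts = np.random.default_rng(1).normal(size=(1_000_000, 3))
    t1 = None
    for p in (1, 2, 4):
        chunks = np.array_split(pts, p)        # partition samples across workers
        t0 = perf_counter()
        with Pool(p) as pool:
            total = sum(pool.map(partial_residual, chunks))
        tp = perf_counter() - t0
        t1 = tp if t1 is None else t1
        print(f"p={p}: time={tp:.3f}s  speedup={t1 / tp:.2f}  "
              f"efficiency={t1 / (p * tp):.2f}")
```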
|
253 |
Effektive Beobachtung von zufälligen Funktionen unter besonderer Berücksichtigung von Ableitungen / Effective observation of random functions with special attention to derivatives. Holtmann, Markus. 14 June 2001
This work investigates experimental design for the approximation of random functions, considering deterministic spline methods, stochastic-deterministic kriging methods, and regression methods, each using derivative samples. The mathematical framework for proving a general equivalence between kriging and spline methods is developed. For the case of finitely many non-Hermitian samples, which is important in practical applications, an experimental design procedure for random functions with asymptotically vanishing correlation is developed. Furthermore, the influence of derivatives on the variance of (local) regression estimators is examined. Finally, an experimental design method is presented that, through regularization by means of perturbed covariance matrices, mimics principles of classical experimental design in the correlated case.
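To illustrate what kriging with derivative samples means operationally, here is a minimal simple-kriging sketch with a Gaussian covariance in which both function values and derivatives enter the predictor; the covariances involving the derivative process follow by differentiating the kernel. The kernel choice, length scale, and test function are assumptions for illustration, not the thesis's setup.

```python
import numpy as np

ell = 0.3  # length scale (assumed value)

def c00(a, b):  # cov(Y(a), Y(b)), Gaussian kernel with unit variance
    return np.exp(-(a - b) ** 2 / (2 * ell ** 2))

def c01(a, b):  # cov(Y(a), Y'(b)) = d/db c00(a, b)
    return (a - b) / ell ** 2 * c00(a, b)

def c11(a, b):  # cov(Y'(a), Y'(b)) = d^2/(da db) c00(a, b)
    return (1.0 / ell ** 2 - (a - b) ** 2 / ell ** 4) * c00(a, b)

xv = np.array([0.0, 0.5, 1.0])             # sites with value observations
xd = np.array([0.25, 0.75])                # sites with derivative observations
f = np.sin(2 * np.pi * xv)                 # observed values
df = 2 * np.pi * np.cos(2 * np.pi * xd)    # observed derivatives

Kvv = c00(xv[:, None], xv[None, :])
Kvd = c01(xv[:, None], xd[None, :])
Kdd = c11(xd[:, None], xd[None, :])
K = np.block([[Kvv, Kvd], [Kvd.T, Kdd]])   # joint covariance of observations

xs = np.linspace(0.0, 1.0, 5)              # prediction sites
ks = np.hstack([c00(xs[:, None], xv[None, :]), c01(xs[:, None], xd[None, :])])
pred = ks @ np.linalg.solve(K, np.concatenate([f, df]))  # simple-kriging mean
print(pred)
```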
|
254 |
Evolution temporelle du champ magnétique lunaire / Temporal evolution of the lunar magnetic field. Lepaulard, Camille. 28 November 2018
It is established that the Moon once had a magnetic field generated by an internal dynamo. However, the mechanisms that created and sustained the dynamo are still poorly understood, and the lifetime of this magnetic field is still debated. My thesis work consisted first of a magnetic characterization (natural magnetization and magnetic susceptibility) of a large part of the Apollo collection, with the study of 161 rocks. I used the ratio of natural magnetization to magnetic susceptibility as a coarse indicator of paleointensity. These results, consistent with the two major epochs of the lunar magnetic field (a high-field epoch before ~3.5 Ga and a weak field afterwards), allowed me to select samples for the detailed laboratory paleomagnetic analyses that formed the rest of my work. I thus studied the natural magnetization of 25 Apollo samples and 2 lunar meteorites. Different techniques yielded 8 paleointensity values (1-47 µT) and 7 upper limits on paleointensity (< 30 µT). These data, coupled with radiometric ages (both existing and newly acquired), trace the evolution of the lunar surface field over time. The results corroborate the existence of a high-field epoch (4-3.5 Ga) and extend this epoch to about 3 Ga. The paleointensities > 1 µT that we obtain up to 0.1 Ga indicate a very late shutdown of the dynamo. Moreover, weak paleointensities are obtained within the high-field epoch, suggesting a mean field value lower than proposed in the literature. This study better constrains the temporal evolution of the lunar magnetic field.
|
255 |
Estimation of Pareto Distribution Functions from Samples Contaminated by Measurement Errors. Kondlo, Lwando Orbet. January 2010
Magister Scientiae - MSc / Estimation of population distributions from samples that are contaminated
by measurement errors is a common problem. This study considers the problem
of estimating the population distribution of independent random variables
Xi from error-contaminated samples Yi (i = 1, ..., n) such that Yi = Xi + εi,
where ε is the measurement error, which is assumed independent of X. The
measurement error ε is also assumed to be normally distributed. Since the
observed distribution function is a convolution of the error distribution with
the true underlying distribution, estimation of the latter is often referred to
as a deconvolution problem. A thorough study of the relevant deconvolution
literature in statistics is reported.
We also deal with the specific case when X is assumed to follow a truncated
Pareto form. If observations are subject to Gaussian errors, then the observed
Y is distributed as the convolution of the finite-support Pareto and Gaussian
error distributions. The convolved probability density function (PDF)
and cumulative distribution function (CDF) of the finite-support Pareto and
Gaussian distributions are derived.
The intention is to draw more specific connections between certain deconvolution
methods and also to demonstrate the application of the statistical theory
of estimation in the presence of measurement error.
A parametric methodology for deconvolution when the underlying distribution
is of the Pareto form is developed.
Maximum likelihood estimation (MLE) of the parameters of the convolved distributions
is considered. Standard errors of the estimated parameters are calculated
from the inverse of the Fisher information matrix and by a jackknife method.
Probability-probability (P-P) plots and Kolmogorov-Smirnov (K-S) goodness-of-fit
tests are used to evaluate the fit of the posited distribution. A bootstrapping
method is used to calculate the critical values of the K-S test statistic,
which are not otherwise available.
Simulated data are used to validate the methodology. A real-life application
of the methodology is illustrated by fitting convolved distributions to astronomical
data.
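A minimal sketch of the setting described above: Y = X + ε with X truncated Pareto on [L, U] and ε Gaussian, with the convolved PDF computed here by numerical quadrature (the thesis derives it in closed form) and the shape parameter recovered by maximum likelihood. The parameter values and the restriction to estimating α alone are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize
from scipy.stats import norm

def pareto_trunc_pdf(x, alpha, L, U):
    # PDF of a Pareto distribution truncated to [L, U]
    return alpha * L**alpha * x**(-alpha - 1.0) / (1.0 - (L / U)**alpha)

def convolved_pdf(y, alpha, L, U, sigma):
    # density of Y = X + eps, computed by quadrature over the support of X
    integrand = lambda x: pareto_trunc_pdf(x, alpha, L, U) * norm.pdf(y - x, scale=sigma)
    return quad(integrand, L, U)[0]

def neg_log_lik(theta, y, L, U, sigma):
    alpha = theta[0]
    if alpha <= 0.0:
        return np.inf
    return -sum(np.log(convolved_pdf(yi, alpha, L, U, sigma)) for yi in y)

# simulate error-contaminated data, then recover alpha by MLE
rng = np.random.default_rng(2)
alpha_true, L, U, sigma = 1.5, 1.0, 10.0, 0.3
u = rng.uniform(size=200)
x = (L**-alpha_true - u * (L**-alpha_true - U**-alpha_true)) ** (-1.0 / alpha_true)
y = x + rng.normal(scale=sigma, size=x.size)

fit = minimize(neg_log_lik, x0=[1.0], args=(y, L, U, sigma), method="Nelder-Mead")
print("alpha_hat =", fit.x[0])
```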
|
256 |
Gitarr och elektroniska verktyg: Hur man, som sologitarrist, kan skapa en musikalisk helhet med elektroniska verktyg / Guitar and electronic tools: How a solo guitarist can create a musical whole with electronic tools. Andersson, Arvid. January 2022
This study seeks to explore the opportunities and boundaries of creating music as a solo guitarist with different electronic tools, namely samples, loops, and effects. Furthermore, this study aims to discover the opportunities that exist in Logic Pro X for creating more randomized music. To achieve this, I use different samplers and MIDI plug-ins and explore the different ways to bring forward the improvisational element in these. I also investigate how, and if, the use of samples and loops influences my improvisations and compositions. The result of this study is a self-composed suite consisting of three parts, in which I use all the techniques and methods I assembled over the course of this study. The study concludes that the use of samples, loops, and effects had a notable influence on my compositions and improvisations: my compositions took a more unconventional form, making use of graphical charts, and my improvisations were noticeably shaped by the effects being used and the samples being played. I also discovered different ways and techniques to modify, manipulate, and randomize sounds using Logic Pro X. / Arvid Andersson, electric guitar. Kaoset - Arvid Andersson; Viskningarna - Arvid Andersson; Hemmet - Arvid Andersson.
|
257 |
Factorisation du rendu de Monte-Carlo fondée sur les échantillons et le débruitage bayésien / Factorization of Monte Carlo rendering based on samples and Bayesian denoising. Boughida, Malik. 23 March 2017
Monte Carlo ray tracing is known to be a particularly well-suited class of algorithms for photorealistic rendering. However, its fundamentally random nature breeds noise in the generated images. In this thesis, we develop new algorithms based on Monte Carlo samples and Bayesian inference in order to factorize rendering computations, by sharing information across pixels or by caching previous results. In the context of offline rendering, we build upon a recent denoising technique from the image processing community, called Non-local Bayes, to develop a new patch-based collaborative denoising algorithm, named Bayesian Collaborative Denoising. It is designed to be adapted to the specificities of Monte Carlo noise, and uses the additional input data that we can get by gathering per-pixel sample statistics. In a second step, to factorize the computations of interactive Monte Carlo rendering, we propose a new algorithm based on path tracing, called Dynamic Bayesian Caching. A clustering of pixels enables a smart grouping of many samples, so that meaningful statistics can be computed on them. These statistics are compared with the ones stored in a cache to decide whether the former should replace or be merged with the latter. Finally, a Bayesian denoising, inspired by the work of the first part, is applied to enhance image quality.
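As a stripped-down illustration of the Bayesian shrinkage at the heart of Non-local Bayes-style denoisers, the sketch below works per pixel rather than per patch: each pixel's Monte Carlo mean is pulled toward a local prior mean, with a gain set by the ratio of the estimated prior variance to the per-pixel sample variance of the mean. The actual Bayesian Collaborative Denoising operates on patches with full covariance matrices; the window size and test data here are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def bayes_shrink(mean_img, var_of_mean, radius=3):
    size = 2 * radius + 1
    mu = uniform_filter(mean_img, size)                 # local prior mean
    second = uniform_filter(mean_img**2, size)
    total_var = second - mu**2                          # local variance of pixel means
    prior_var = np.maximum(total_var - var_of_mean, 0)  # remove Monte Carlo noise part
    gain = prior_var / (prior_var + var_of_mean + 1e-12)
    return mu + gain * (mean_img - mu)                  # Bayesian (Wiener) shrinkage

# per-pixel radiance samples: shape (n_samples, H, W)
rng = np.random.default_rng(3)
clean = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
samples = clean + rng.normal(scale=0.5, size=(32, 64, 64))
mean_img = samples.mean(axis=0)
var_of_mean = samples.var(axis=0, ddof=1) / samples.shape[0]
denoised = bayes_shrink(mean_img, var_of_mean)
print("MSE noisy:   ", float(np.mean((mean_img - clean) ** 2)))
print("MSE denoised:", float(np.mean((denoised - clean) ** 2)))
```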
|
258 |
Strategies for Sparsity-based Time-Frequency Analyses. Zhang, Shuimei, 0000-0001-8477-5417. January 2021
Nonstationary signals are widely observed in many real-world applications, e.g., radar, sonar, radio astronomy, communication, acoustics, and vibration applications. Joint time-frequency (TF) domain representations provide a time-varying spectrum for their analyses, discrimination, and classifications. Nonstationary signals commonly exhibit sparse occupancy in the TF domain. In this dissertation, we incorporate such sparsity to enable robust TF analysis in impaired observing environments.
In practice, missing data samples frequently occur during signal reception due to various reasons, e.g., propagation fading, measurement obstruction, removal of impulsive noise or narrowband interference, and intentional undersampling. Missing data samples in the time domain manifest as missing entries in the instantaneous autocorrelation function (IAF) and induce artifacts in the TF representation (TFR). Compared to random missing samples, a more realistic and more challenging problem is the existence of burst missing data samples. Unlike the effects of random missing samples, which cause the artifacts to be uniformly spread over the entire TF domain, the artifacts due to burst missing samples are highly localized around the true instantaneous frequencies, rendering extremely challenging TF analyses for which many existing methods become ineffective.
In this dissertation, our objective is to develop novel signal processing techniques that offer effective TF analysis capability in the presence of burst missing samples. We propose two mutually related methods that recover missing entries in the IAF and reconstruct high-fidelity TFRs, which approach full-data results with negligible performance loss. In the first method, an IAF slice corresponding to the time or lag is converted to a Hankel matrix, and its missing entries are recovered via atomic norm minimization. The second method generalizes this approach to reduce the effects of TF crossterms. It considers an IAF patch, which is reformulated as a low-rank block Hankel matrix, and the annihilating filter-based approach is used to interpolate the IAF and recover the missing entries. Both methods are insensitive to signal magnitude differences. Furthermore, we develop a novel machine learning-based approach that offers crossterm-free TFRs with effective autoterm preservation. The superiority and usefulness of the proposed methods are demonstrated using simulated and real-world signals. / Electrical and Computer Engineering
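The following sketch illustrates the first method's core idea on a toy lag slice: samples of a sinusoidal slice form a (near) low-rank Hankel matrix, so a burst of missing entries can be filled by alternating a low-rank projection with re-imposing the observed data. Note that simple singular-value truncation is used here in place of the atomic norm minimization of the dissertation, and all sizes and signal parameters are illustrative.

```python
import numpy as np
from scipy.linalg import hankel, svd

def hankel_complete(vals, observed, rank, n_iter=300):
    n = len(vals)
    m = n // 2 + 1
    est = np.where(observed, vals, 0.0)
    idx = np.add.outer(np.arange(m), np.arange(n - m + 1)).ravel()
    for _ in range(n_iter):
        H = hankel(est[:m], est[m - 1:])               # H[i, j] = est[i + j]
        U, s, Vt = svd(H, full_matrices=False)
        Hr = (U[:, :rank] * s[:rank]) @ Vt[:rank]      # low-rank projection
        rec = np.bincount(idx, weights=Hr.ravel(), minlength=n)
        rec /= np.bincount(idx, minlength=n)           # average anti-diagonals
        est = np.where(observed, vals, rec)            # re-impose observed data
    return est

# toy lag slice: two cosines (Hankel rank 4) with a burst of missing samples
n = 64
t = np.arange(n)
slice_vals = np.cos(2 * np.pi * 0.11 * t) + 0.5 * np.cos(2 * np.pi * 0.23 * t)
observed = np.ones(n, dtype=bool)
observed[20:30] = False                                # burst missing samples
rec = hankel_complete(slice_vals, observed, rank=4)
print("max error on the missing burst:",
      float(np.max(np.abs(rec[20:30] - slice_vals[20:30]))))
```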
|
259 |
TIME-FREQUENCY ANALYSIS TECHNIQUES FOR NON-STATIONARY SIGNALS USING SPARSITY. AMIN, VAISHALI, 0000-0003-0873-3981. January 2022
Non-stationary signals, particularly frequency modulated (FM) signals which are characterized by their time-varying instantaneous frequencies (IFs), are fundamental
to radar, sonar, radio astronomy, biomedical applications, image processing, speech
processing, and wireless communications. Time-frequency (TF) analyses of such signals
provide a two-dimensional mapping of time-domain signals, and thus are regarded
as the preferred technique for detection, parameter estimation, analysis and
utilization of such signals.
In practice, these signals are often received with compressed measurements as a
result of either missing samples, irregular samplings, or intentional under-sampling of
the signals. These compressed measurements induce undesired noise-like artifacts in
the TF representations (TFRs) of such signals. Compared to random missing data,
burst missing samples present a more realistic, yet a more challenging, scenario for
signal detection and parameter estimation through robust TFRs. In this dissertation,
we investigated the effects of burst missing samples on different joint-variable domain
representations in detail.
Conventional TFRs are not designed to deal with such compressed observations.
On the other hand, sparsity of such non-stationary signals in the TF domain facilitates
utilization of sparse reconstruction-based methods. The limitations of conventional
TF approaches and the sparsity of non-stationary signals in TF domain motivated us
to develop effective TF analysis techniques that enable improved IF estimation of such
signals with high resolution, mitigate undesired effects of cross terms and artifacts
and achieve highly concentrated robust TFRs, which is the goal of this dissertation.
In this dissertation, we developed several TF analysis techniques that achieved
the aforementioned objectives. The developed methods are mainly classified into
three broad categories: iterative missing-data recovery, an adaptive local filtering-based TF approach, and signal stationarization-based approaches. In the first category,
we recovered the missing data in the instantaneous auto-correlation function (IAF)
domain in conjunction with signal-adaptive TF kernels that are adopted to mitigate
undesired cross-terms and preserve desired auto-terms. In these approaches, we took
advantage of the fact that such non-stationary signals become stationary in the IAF
domain at each time instant. In the second category, we developed a novel adaptive
local filtering-based TF approach that involves local peak detection and filtering of
TFRs within a window of a specified length at each time instant. The threshold for
each local TF segment is adapted based on the local maximum values of the signal
within that segment. This approach offers low complexity and is particularly
useful for multi-component signals with distinct amplitude levels. Finally, we developed
knowledge-based TFRs based on signal stationarization and demonstrated
the effectiveness of the proposed TF techniques in high-resolution Doppler analysis
of multipath over-the-horizon radar (OTHR) signals. This is an effective technique
that enables improved target parameter estimation in OTHR operations. However,
due to the close proximity of these Doppler signatures in the TF domain, their separation
poses a challenging problem. By utilizing signal self-stationarization and ensuring IF
continuity, the developed approaches show excellent performance to handle multiple
signal components with variations in their amplitude levels. / Electrical and Computer Engineering
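A minimal sketch of the adaptive local filtering idea from the second category: at each time instant, a window slides over the frequency axis, and TF values below a fixed fraction of that segment's local maximum are suppressed, so a weak component survives wherever it is locally dominant. The spectrogram stand-in, window length, and threshold ratio are assumptions; the dissertation's method additionally performs explicit local peak detection.

```python
import numpy as np
from scipy.signal import stft

def adaptive_local_filter(tfr, win=16, ratio=0.3):
    out = np.zeros_like(tfr)
    for ti in range(tfr.shape[1]):               # each time instant
        for f0 in range(0, tfr.shape[0], win):   # window over frequency
            seg = tfr[f0:f0 + win, ti]
            thresh = ratio * seg.max()           # threshold adapts to the local maximum
            out[f0:f0 + win, ti] = np.where(seg >= thresh, seg, 0.0)
    return out

# two-component FM signal with distinct amplitude levels
fs = 1024
t = np.arange(fs) / fs
sig = np.cos(2 * np.pi * (100 * t + 80 * t**2)) + 0.2 * np.cos(2 * np.pi * 300 * t)
f, tt, Z = stft(sig, fs=fs, nperseg=128)
tfr = np.abs(Z) ** 2
filtered = adaptive_local_filter(tfr)
print("TF cells kept:", int((filtered > 0).sum()), "of", filtered.size)
```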
|
260 |
Identification of Disease-Associated Cryptococcal Proteins Reactive With Serum IgG From Cryptococcal Meningitis Patients. Gressler, A. Elisabeth; Volke, Daniela; Firacative, Carolina; Schnabel, Christiane L.; Müller, Uwe; Krizsan, Andor; Schulze-Richter, Bianca; Brock, Matthias; Brombacher, Frank; Escandón, Patricia; Hoffmann, Ralf; Alber, Gottfried. 24 March 2023
Cryptococcus neoformans, an opportunistic fungal pathogen ubiquitously present in the
environment, causes cryptococcal meningitis (CM) mainly in immunocompromised
patients, such as AIDS patients. We aimed to identify disease-associated cryptococcal
protein antigens targeted by the human humoral immune response. Therefore, we used
sera from Colombian CM patients, with or without HIV infection, and from healthy
individuals living in the same region. Serological analysis revealed increased titers of
anti-cryptococcal IgG in HIV-negative CM patients, but not HIV-positive CM patients,
compared to healthy controls. In contrast, titers of anti-cryptococcal IgM were not affected
by CM. Furthermore, we detected pre-existing IgG and IgM antibodies even in sera from
healthy individuals. The observed induction of anti-cryptococcal IgG but not IgM during
CM was supported by analysis of sera from C. neoformans-infected mice. A stronger
increase in IgG was found in wild-type mice with high lung fungal burden compared to
IL-4Rα-deficient mice showing low lung fungal burden. To identify the proteins targeted by
human anti-cryptococcal IgG antibodies, we applied a quantitative 2D immunoproteome
approach identifying cryptococcal protein spots preferentially recognized by sera from CM
patients or healthy individuals followed by mass spectrometry analysis. Twenty-three
cryptococcal proteins were recombinantly expressed and confirmed to be
immunoreactive with human sera. Fourteen of them were newly described as
immunoreactive proteins. Twelve proteins were classified as disease-associated
antigens, based on significantly stronger immunoreactivity with sera from CM patients
compared to healthy individuals. The proteins identified in our screen significantly expand
the pool of cryptococcal proteins with potential for (i) development of novel
anti-cryptococcal agents based on implications in cryptococcal virulence or survival, or
(ii) development of an anti-cryptococcal vaccine, as several candidates lack homology
to human proteins and are localized extracellularly. Furthermore, this study defines
pre-existing anti-cryptococcal immunoreactivity in healthy individuals at a molecular level,
identifying target antigens recognized by sera from healthy control persons.
|