251 |
Large scale broadband antenna array systems
El-Makadema, Ahmed Talal, January 2011
Broadband antenna arrays have become increasingly popular for various imaging applications, such as radio telescopes and radar, where high sensitivity and resolution are required. High sensitivity requires the development of large scale broadband arrays capable of imaging distant sources at many different wavelengths, in addition to overcoming noise and jamming signals. The design of large scale broadband antenna arrays requires a large number of antennas, increasing the cost and complexity of the overall system. Moreover, noise sources often vary, depending on their wavelengths and angular locations. This increases the overall design complexity, particularly for broadband applications where the performance depends not only on the required bandwidth, but also on the frequency band. This thesis provides a study of broadband antenna array systems for large scale applications. The study investigates different tradeoffs associated with designing such systems and derives a novel design approach to optimize both their cost and performance for a wide range of applications. In addition, the thesis includes measurements of a suitable array to validate the computational predictions. Finally, the thesis demonstrates how this study can be used to optimize a broadband antenna array system suitable for a low frequency radio telescope.
|
252 |
Représentations redondantes pour les signaux d'électroencéphalographie / Redundant representations for electroencephalography signals
Isaac, Yoann, 29 May 2015
Electroencephalography measures brain activity by recording variations of the electric field at the surface of the skull. The measurement is used for medical diagnosis, the study of brain function, and brain-computer interfaces. Numerous studies have developed methods for analyzing these signals in order to extract components of interest, yet their processing still raises many difficulties. This thesis focuses on methods for obtaining redundant (overcomplete) representations of EEG signals. In recent years such representations have proved particularly effective for describing many classes of signals thanks to their great flexibility. Obtaining them for EEG measurements is difficult because the components of interest have a low signal-to-noise ratio. We propose to overcome this by guiding the considered methods toward physiologically plausible representations of the EEG signals through well-suited regularizations, built from prior knowledge about the spatial and temporal properties of these signals. For each regularization, algorithms are proposed to solve the optimization problems associated with obtaining the representations. Evaluation of the proposed approaches on EEG signals highlights the effectiveness of the regularizations and the value of the representations obtained.
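The thesis's spatio-temporal regularizers are not detailed in this abstract; as a minimal, hedged illustration of obtaining a sparse code over a redundant (overcomplete) dictionary, the Python sketch below uses a plain l1 penalty solved by iterative shrinkage (ISTA). The dictionary, penalty weight, and dimensions are illustrative assumptions, not the thesis's physiological models.

```python
import numpy as np

def ista_sparse_code(D, x, lam, step, n_iter=200):
    """Solve min_z 0.5*||x - D z||^2 + lam*||z||_1 by ISTA.

    D : (n_samples, n_atoms) redundant dictionary (n_atoms > n_samples)
    x : (n_samples,) one signal segment (e.g., one EEG channel window)
    """
    z = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ z - x)            # gradient of the data-fit term
        z = z - step * grad                 # gradient step
        z = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-threshold
    return z

# Toy usage: a random overcomplete dictionary and a noisy sparse signal.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)              # unit-norm atoms
z_true = np.zeros(256); z_true[[3, 77, 190]] = [1.5, -2.0, 1.0]
x = D @ z_true + 0.01 * rng.standard_normal(64)
step = 1.0 / np.linalg.norm(D, 2) ** 2      # 1/L, L = Lipschitz const. of grad
z_hat = ista_sparse_code(D, x, lam=0.05, step=step)
```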
|
253 |
The SCExAO high contrast imager: transitioning from commissioning to science
Jovanovic, N., Guyon, O., Lozi, J., Currie, T., Hagelberg, J., Norris, B., Singh, G., Pathak, P., Doughty, D., Goebel, S., Males, J., Kuhn, J., Serabyn, E., Tuthill, P., Schworer, G., Martinache, F., Kudo, T., Kawahara, H., Kotani, T., Ireland, M., Feger, T., Rains, A., Bento, J., Schwab, C., Coutts, D., Cvetojevic, N., Gross, S., Arriola, A., Lagadec, T., Kasdin, J., Groff, T., Mazin, B., Minowa, Y., Takato, N., Tamura, M., Takami, H., Hayashi, M., 26 July 2016
SCExAO is the premier high-contrast imaging platform for the Subaru Telescope. It offers high Strehl ratios at near-IR wavelengths (y-K band) with stable pointing, and coronagraphs with extremely small inner working angles optimized for imaging faint companions very close to their host stars. In the visible, it has several interferometric imagers offering polarimetric and spectroscopic capabilities. A recent addition is the RHEA spectrograph, which enables, for example, spatially resolved high-resolution spectroscopy of the surfaces of giant stars. New capabilities on the horizon include post-coronagraphic spectroscopy, spectral differential imaging, and nulling interferometry, as well as an integral field spectrograph and an MKID array. Here we present the new modules of SCExAO, give an overview of the commissioning status of each module, and present preliminary results.
|
254 |
Towards Robust Machine Learning Models for Data Scarcity
January 2020
A well-designed and well-trained neural network can now yield state-of-the-art results across many domains, including data mining, computer vision, and medical image analysis. But progress has been limited for tasks where labels are difficult or impossible to obtain; this reliance on exhaustive labeling is a critical limitation to the rapid deployment of neural networks. Moreover, current research scales poorly to large numbers of unseen concepts and is passively spoon-fed with data and supervision.
To overcome these data scarcity and generalization issues, in my dissertation I first propose two unsupervised conventional machine learning algorithms, hyperbolic stochastic coding and multi-resemble multi-target low-rank coding, to solve the incomplete-data and missing-label problems. I further introduce a deep multi-domain adaptation network that leverages the power of deep learning by transferring rich knowledge from a large labeled source dataset. I also introduce a novel time-sequence dynamically hierarchical network that adaptively simplifies itself to cope with scarce data.
To learn a large number of unseen concepts, lifelong machine learning offers many advantages, including abstracting knowledge from prior learning and using that experience to help future learning, regardless of how much data is currently available. Incorporating this capability and making it versatile, I propose deep multi-task weight consolidation to accumulate knowledge continuously and significantly reduce data requirements in a variety of domains. Inspired by recent breakthroughs in automatically learning suitable neural network architectures (AutoML), I develop a nonexpansive AutoML framework to train an online model without an abundance of labeled data. This work automatically expands the network to increase model capability when necessary, then compresses the model to maintain efficiency.
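The abstract does not spell out the form of deep multi-task weight consolidation; as a hedged sketch of the general weight-consolidation family it builds on (in the spirit of elastic weight consolidation), a quadratic penalty can anchor parameters that were important for earlier tasks. All names and the lam weight below are illustrative assumptions, not the dissertation's method.

```python
import numpy as np

def consolidation_penalty(theta, theta_old, importance, lam):
    """Quadratic weight-consolidation penalty (EWC-style, hypothetical form):
    0.5 * lam * sum_i importance_i * (theta_i - theta_old_i)^2.
    Parameters that mattered for earlier tasks (large importance_i) are
    discouraged from drifting while a new task is learned."""
    return 0.5 * lam * np.sum(importance * (theta - theta_old) ** 2)

def multitask_loss(new_task_loss, theta, anchors, lam):
    """New-task loss plus one consolidation term per previously learned task.
    anchors: list of (saved_parameters, parameter_importance) pairs."""
    loss = new_task_loss(theta)
    for theta_old, importance in anchors:
        loss += consolidation_penalty(theta, theta_old, importance, lam)
    return loss
```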
In my current ongoing work, I propose an alternative method of supervised learning that does not require direct labels. It draws supervision from properties of the image or object itself, used as target values for the tasks of interest, and this turns out to be surprisingly effective. The proposed method requires only few-shot labeled data for training, learns the information it needs in a self-supervised manner, and generalizes to datasets not seen during training. / Dissertation/Thesis / Doctoral Dissertation Computer Science 2020
|
255 |
Scalable sparse machine learning methods for big data
Zeng, Yaohui, 15 December 2017
Sparse machine learning models have become increasingly popular in analyzing high-dimensional data. With the evolving era of Big Data, ultrahigh-dimensional, large-scale data sets are constantly collected in many areas such as genetics, genomics, biomedical imaging, social media analysis, and high-frequency finance. Mining valuable information efficiently from these massive data sets requires not only novel statistical models but also advanced computational techniques. This thesis focuses on the development of scalable sparse machine learning methods to facilitate Big Data analytics.
Built upon the feature screening technique, the first part of this thesis proposes a family of hybrid safe-strong rules (HSSR) that incorporate safe screening rules into the sequential strong rule to remove unnecessary computational burden when solving lasso-type models. We present two instances of HSSR, namely SSR-Dome and SSR-BEDPP, for the standard lasso problem, and further extend SSR-BEDPP to the elastic net and group lasso problems to demonstrate the generalizability of the hybrid screening idea. In the second part, we design and implement an R package called biglasso to extend lasso model fitting to Big Data in R. The biglasso package uses memory-mapped files to store massive data on disk, reading data into memory only when necessary during model fitting, and is thus able to handle data-larger-than-RAM cases seamlessly. Moreover, it is built upon our redesigned algorithm incorporating the proposed HSSR screening, making it much more memory- and computation-efficient than existing R packages. Extensive numerical experiments with synthetic and real data sets are conducted in both parts to show the effectiveness of the proposed methods.
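For context on the screening idea (not the thesis's exact hybrid rule), below is a minimal Python sketch of the sequential strong rule of Tibshirani et al. (2012), which HSSR combines with safe tests such as BEDPP. The function name and the 1/(2n) loss convention are assumptions.

```python
import numpy as np

def strong_rule_survivors(X, y, beta_prev, lam_new, lam_prev):
    """Sequential strong rule for the lasso, assuming the objective
    1/(2n) * ||y - X beta||^2 + lam * ||beta||_1.

    Feature j is kept only if |x_j' r / n| >= 2*lam_new - lam_prev, where
    r is the residual at the previous lambda on the solution path. The
    rule is heuristic, so a KKT check after fitting catches rare
    violations; HSSR additionally combines it with safe (never-wrong)
    tests, not shown here."""
    n = X.shape[0]
    r = y - X @ beta_prev                 # residual at lam_prev
    corr = np.abs(X.T @ r) / n            # per-feature correlations
    return np.flatnonzero(corr >= 2.0 * lam_new - lam_prev)
```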
In the third part, we consider a novel statistical model, the overlapping group logistic regression model, which allows for selecting important groups of features associated with binary outcomes when the features belong to overlapping groups. We conduct systematic simulation and real-data studies to show its advantages in the application of genetic pathway selection. We implement an R package called grpregOverlap, with HSSR screening built in, for fitting overlapping group lasso models.
|
256 |
Sparse Reconstruction Schemes for Nonlinear Electromagnetic Imaging
Desmal, Abdulla, 03 1900
Electromagnetic imaging is the problem of determining material properties from scattered fields measured away from the domain under investigation. Solving this inverse problem is a challenging task because (i) it is ill-posed, due to the presence of (smoothing) integral operators in the representation of scattered fields in terms of material properties, and because scattered fields are obtained at a finite set of points through noisy measurements; and (ii) it is nonlinear, simply due to the fact that scattered fields are nonlinear functions of the material properties. The work described in this thesis tackles the ill-posedness of the electromagnetic imaging problem using sparsity-based regularization techniques, which assume that the scatterer(s) occupy only a small fraction of the investigation domain. More specifically, four novel imaging methods are formulated and implemented. (i) A sparsity-regularized Born iterative method iteratively linearizes the nonlinear inverse scattering problem, and each linear problem is regularized using an improved iterative shrinkage algorithm enforcing the sparsity constraint. (ii) A sparsity-regularized nonlinear inexact Newton method calls for the solution of a linear system involving the Fréchet derivative matrix of the forward scattering operator at every iteration step; for faster convergence, the solution of this matrix system is regularized under the sparsity constraint and preconditioned by leveling the matrix singular values. (iii) A sparsity-regularized nonlinear Tikhonov method directly solves the nonlinear minimization problem using Landweber iterations, where a thresholding function is applied at every iteration step to enforce the sparsity constraint. (iv) This last scheme is accelerated using a projected steepest descent method when it is applied to three-dimensional investigation domains; projection replaces the thresholding operation and enforces the sparsity constraint. Numerical experiments, carried out using synthetically generated or actually measured scattered fields, show that the images recovered by these sparsity-regularized methods are sharper and more accurate than those produced by existing methods. The methods developed in this work have potential application areas ranging from oil/gas reservoir engineering to biological imaging, where sparse domains naturally exist.
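As a rough illustration of scheme (iii) in a linearized setting (the thesis applies it to the nonlinear scattering operator), a thresholded Landweber iteration alternates a gradient step with soft thresholding. The operator A, penalty lam, and step tau below are illustrative assumptions, with tau required to be below 2/||A||^2 for convergence.

```python
import numpy as np

def thresholded_landweber(A, y, lam, tau, n_iter=300):
    """Sparsity-enforcing Landweber iteration for y ~= A x (linear sketch).

    Each step applies a Landweber (gradient) update followed by soft
    thresholding, which enforces the sparsity constraint. Convergence
    requires tau < 2 / ||A||_2^2."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + tau * (A.T @ (y - A @ x))                         # Landweber step
        x = np.sign(x) * np.maximum(np.abs(x) - tau * lam, 0.0)   # soft threshold
    return x

# Toy usage with a random 'measurement' operator (illustrative only).
rng = np.random.default_rng(1)
A = rng.standard_normal((80, 200))
x_true = np.zeros(200); x_true[[10, 50, 120]] = [1.0, -0.7, 0.4]
y = A @ x_true + 0.01 * rng.standard_normal(80)
tau = 1.0 / np.linalg.norm(A, 2) ** 2
x_hat = thresholded_landweber(A, y, lam=0.1, tau=tau)
```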
|
257 |
Advances in RGB and RGBD Generic Object Trackers
Bibi, Adel, 04 1900
Visual object tracking is a classical and very popular problem in computer vision with a plethora of applications such as vehicle navigation, human-computer interfaces, human motion analysis, surveillance, auto-control systems, and many more. Given the initial state of a target in the first frame, the goal of tracking is to predict the target's state over time, where each state describes a bounding box covering the target. Despite the numerous object tracking methods proposed in recent years [1-4], most trackers suffer degraded performance because of challenges that include illumination changes, motion blur, complex motion, out-of-plane rotation, and partial or full occlusion; occlusion is usually the factor that contributes most to degrading the majority of trackers, if not all of them. This thesis is devoted to the advancement of generic object trackers, tackling different challenges through different proposed methods. The work presented proposes four new state-of-the-art trackers: one is a 3D-based tracker in a particle filter framework, in which both synchronization and registration of the RGB and depth streams are adjusted automatically, and three are correlation filter trackers that achieve state-of-the-art accuracy while maintaining reasonable speeds.
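The abstract does not specify which correlation filter formulations the three trackers use; as background on the family, below is a minimal MOSSE-style correlation filter (Bolme et al., 2010) trained and applied in the Fourier domain. All names and parameters are illustrative, and the usual preprocessing (log transform, cosine window) is omitted.

```python
import numpy as np

def mosse_train(patches, target_response, eps=1e-3):
    """Train a MOSSE-style correlation filter in the Fourier domain.

    patches: iterable of (h, w) grayscale patches of the target
    target_response: (h, w) desired output, e.g. a Gaussian peak at the
    target centre. Returns H* such that response = IFFT(FFT(patch) * H*)."""
    G = np.fft.fft2(target_response)
    num = np.zeros_like(G)
    den = np.full_like(G, eps)          # eps regularizes the division
    for p in patches:
        F = np.fft.fft2(p)
        num += G * np.conj(F)
        den += F * np.conj(F)
    return num / den

def mosse_locate(H_star, patch):
    """Correlate a new patch with the filter; the peak locates the target."""
    response = np.fft.ifft2(np.fft.fft2(patch) * H_star).real
    return np.unravel_index(np.argmax(response), response.shape)
```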
|
258 |
Machine Learning for Metabolite Identification with Mass Spectrometry Data
NGUYEN, DAI HAI, 23 September 2020
Kyoto University / 0048 / New degree system, doctoral program / Doctor of Pharmaceutical Sciences / Degree No. 22754 (Kō) / Pharmaceutical Sciences Doctorate No. 128 / 新制||薬科||14 (University Library) / Kyoto University Graduate School of Pharmaceutical Sciences, 医薬創成情報科学専攻 (Pharmaceutical Informatics) / (Chief examiner) Professor Hiroshi Mamitsuka; Professor Hiroyuki Ogata; Professor Yasushi Ishihama / Qualifies under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Pharmaceutical Sciences / Kyoto University / DFAM
|
259 |
Statistical methods for imaging data, imaging genetics and sparse estimation in linear mixed models
Opoku, Eugene A., 21 October 2021
This thesis presents research focused on developing statistical methods, with emphasis on techniques for the analysis of data in imaging studies and sparse estimation in high-dimensional applications. The first contribution addresses the pixel/voxel-labeling problem for spatial hidden Markov models in image analysis. We formulate a Gaussian spatial mixture model with a Potts model used as a prior for the mixture allocations of the latent states. Jointly estimating the model parameters, the discrete state variables, and the number of states (the number of mixture components) is recognized as a difficult combinatorial optimization problem. To overcome drawbacks associated with local algorithms, we implement and compare iterated conditional modes (ICM), simulated annealing (SA), and a hybrid of ICM with ant colony system optimization (ACS-ICM) for pixel labelling, parameter estimation, and mixture component estimation.
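As a minimal sketch of the ICM baseline (not the ACS-ICM hybrid), each sweep below greedily reassigns every pixel the label that minimizes the Gaussian data term plus the Potts neighbourhood penalty. The parameter values and the 4-neighbourhood are illustrative assumptions.

```python
import numpy as np

def icm_labels(img, mu, sigma, beta, n_sweeps=10):
    """Iterated conditional modes for a Gaussian mixture with a Potts prior.

    img: (H, W) observed intensities; mu, sigma: length-K per-class Gaussian
    parameters; beta: Potts interaction strength."""
    H, W = img.shape
    mu = np.asarray(mu, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    K = len(mu)
    # Data term: negative log N(y | mu_k, sigma_k) per pixel, per class.
    data = 0.5 * ((img[..., None] - mu) / sigma) ** 2 + np.log(sigma)  # (H, W, K)
    labels = data.argmin(axis=-1)                # maximum-likelihood init
    for _ in range(n_sweeps):
        for i in range(H):
            for j in range(W):
                nb = [labels[x, y] for x, y in
                      ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                      if 0 <= x < H and 0 <= y < W]
                # Potts penalty: beta * (# of 4-neighbours with another label).
                potts = beta * np.array([sum(k != l for l in nb) for k in range(K)])
                labels[i, j] = int(np.argmin(data[i, j] + potts))
    return labels
```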
In the second contribution, we develop the ACS-ICM algorithm for spatiotemporal modeling of combined MEG/EEG data to compute estimates of neural source activity. We consider a Bayesian finite spatial mixture model with a Potts model as a spatial prior and implement ACS-ICM for simultaneous point estimation and model selection for the number of mixture components. Our approach is evaluated using simulation studies and an application examining the visual response to scrambled faces. In addition, we develop a nonparametric bootstrap for interval estimation to account for uncertainty in the point estimates. In the third contribution, we present sparse estimation strategies in the linear mixed model (LMM) for longitudinal data. We address the problem of estimating the fixed-effects parameters of the LMM when the model is sparse and the predictors are correlated. We propose pretest and shrinkage estimation strategies and derive their asymptotic properties. Simulation studies are performed to compare the numerical performance of the lasso and adaptive lasso estimators with the pretest and shrinkage ridge estimators. The methodology is evaluated through an application to high-dimensional data examining effective brain connectivity and genetics.
In the fourth and final contribution, we conduct an imaging genetics study to explore how effective brain connectivity in the default mode network (DMN) may be related to genetics within the context of Alzheimer’s disease. We analyze longitudinal resting-state functional magnetic resonance imaging (rs-fMRI) and genetic data from a sample of 111 subjects, with a total of 319 rs-fMRI scans, from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) database. A dynamic causal model (DCM) is fit to the rs-fMRI scans to estimate effective brain connectivity within the DMN, which is then related to a set of single nucleotide polymorphisms (SNPs) contained in an empirical disease-constrained set. We relate longitudinal effective brain connectivity estimated using spectral DCM to SNPs using both linear mixed effects (LME) models and function-on-scalar regression (FSR). / Graduate
|
260 |
Sparse subspace clustering-based motion segmentation with complete occlusion handling
Mattheus, Jana, January 2021
Motion segmentation is part of the computer vision field and aims to find the moving parts in a video sequence. It is used in applications such as autonomous driving, surveillance, robotics, human motion analysis, and video indexing. Since there are so many applications, motion segmentation is ill-defined and the research field is vast. Despite the advances in the research over the years, the existing methods are still far behind human capabilities. Problems such as changes in illumination, camera motion, noise, mixtures of motion, missing data, and occlusion remain challenges.
Feature-based approaches have grown in popularity over the years, especially manifold clustering methods, owing to their strong mathematical foundation. Methods exploiting sparse and low-rank representations are often used since they reduce the dimensionality of the data while extracting useful information about the motion segments. However, these methods cannot effectively handle large and complete occlusions, or missing data, since they tend to fail once the amount of missing data becomes too large. An algorithm based on Sparse Subspace Clustering (SSC) is proposed to address occlusions and missing data so that SSC can handle these cases with high accuracy. A frame-to-frame analysis is adopted as a pre-processing step to identify motion segments between consecutive frames, called inter-frame motion segments. This pre-processing step, Multiple Split-And-Merge (MSAM), is based on the classic top-down split-and-merge algorithm. Only points present in both frames of a pair are segmented, meaning that a point undergoing an occlusion is assigned to a motion class only once it has been visible for two consecutive frames after re-entering the camera view. Once all inter-frame segments have been extracted, the results are combined in a single matrix and used as the input for the classic SSC algorithm; SSC therefore segments inter-frame motion segments rather than point trajectories. The resulting algorithm is referred to as MSAM-SSC, and the underlying SSC pipeline is sketched below.
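For reference, the classic SSC pipeline that MSAM-SSC feeds its inter-frame segments into can be sketched as follows: a sparse self-representation per point, a symmetrized affinity, then spectral clustering. The solver, penalty, and dimensions below are illustrative assumptions, not the dissertation's implementation.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def ssc_affinity(X, lam=0.01, n_iter=300):
    """Sparse self-representation: each column x_i ~= X c_i with c_ii = 0,
    solved per point by ISTA on 0.5*||x_i - X c||^2 + lam*||c||_1.

    X: (dim, n_points). Returns the symmetric affinity |C| + |C|^T that
    SSC passes to spectral clustering."""
    d, n = X.shape
    step = 1.0 / np.linalg.norm(X, 2) ** 2       # 1/Lipschitz constant
    C = np.zeros((n, n))
    for i in range(n):
        c = np.zeros(n)
        for _ in range(n_iter):
            c -= step * (X.T @ (X @ c - X[:, i]))                 # gradient step
            c = np.sign(c) * np.maximum(np.abs(c) - step * lam, 0.0)  # shrink
            c[i] = 0.0                            # forbid self-representation
        C[:, i] = c
    return np.abs(C) + np.abs(C).T

# Usage on a (2F, N) trajectory matrix X (x/y coordinates stacked over F frames):
# labels = SpectralClustering(n_clusters=3,
#                             affinity='precomputed').fit_predict(ssc_affinity(X))
```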
MSAM-SSC outperformed some of the most popular manifold clustering methods on the Hopkins155 and KT3DMoSeg datasets. It was also able to handle complete occlusions, sequences with 50% missing data, and outliers, as well as mixtures of motions and different numbers of motions. However, MSAM-SSC was found to be better suited to traffic and articulated-motion scenes, which are common in applications such as robotics, surveillance, and autonomous driving. Future work could optimise the algorithm to reduce execution time for real-time applications, and estimate the number of moving objects in the scene to remove the reliance on prior knowledge. / Dissertation (MEng (Computer Engineering))--University of Pretoria, 2021. / CSIR / Electrical, Electronic and Computer Engineering / MEng (Computer Engineering) / Unrestricted
|