21 |
Multi-resolution methods for high fidelity modeling and control allocation in large-scale dynamical systems. Singla, Puneet, 16 August 2006.
This dissertation introduces novel methods for solving highly challenging modeling and control problems, motivated by advanced aerospace systems. Adaptable, robust, and computationally efficient multi-resolution approximation algorithms based on Radial Basis Function Network and Global-Local Orthogonal Mapping approaches are developed to address various problems associated with the design of large-scale dynamical systems. The main feature of the Radial Basis Function Network approach is the unique direction-dependent scaling and rotation of the radial basis functions via a novel Directed Connectivity Graph approach. Learning the shaping and rotation parameters of the radial basis functions leads to a broadly useful approximation approach: global approximations capable of good local approximation for many moderately dimensioned applications. However, even with these refinements, applications with many high-frequency local input/output variations and a high-dimensional input space remain a challenge and motivate an entirely new approach. The Global-Local Orthogonal Mapping method is based upon a novel averaging process that allows construction of a piecewise-continuous global family of local least-squares approximations, while retaining the freedom to vary in a general way the resolution (e.g., degrees of freedom) of the local approximations. These approximation methodologies are compatible with a wide variety of disciplines such as continuous function approximation, dynamic system modeling, nonlinear signal processing, and time series prediction. Further, related methods are developed for the modeling of dynamical systems nominally described by nonlinear differential equations and for solving the static and dynamic response of Distributed Parameter Systems in an efficient manner. Finally, a hierarchical control allocation algorithm is presented to solve the control allocation problem for highly over-actuated systems that might arise with the development of embedded systems. The control allocation algorithm makes use of the concept of distribution functions to keep in check the "curse of dimensionality". The studies in the dissertation focus on demonstrating, through analysis, simulation, and design, the applicability and feasibility of these approximation algorithms on a variety of examples. The results from these studies are of direct utility in addressing the "curse of dimensionality" and the frequent redundancy of neural network approximation.
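As a rough illustration of the kind of object described above, here is a minimal anisotropic RBF sketch in Python, with output weights fit by linear least squares. This is an illustrative toy under assumed names and parameter values, not the dissertation's Directed Connectivity Graph algorithm.

```python
# Minimal sketch of an anisotropic (direction-dependent) RBF approximation.
# Illustrative only; not the dissertation's Directed Connectivity Graph method.
import numpy as np

def gaussian_rbf(X, center, shape_matrix):
    """Anisotropic Gaussian basis exp(-(x-c)^T S (x-c)). Folding a rotation R
    and per-direction scales D into S = R^T diag(D) R gives the
    direction-dependent scaling and rotation of each basis function."""
    diff = X - center
    return np.exp(-np.einsum('ni,ij,nj->n', diff, shape_matrix, diff))

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(400, 2))
y = np.sin(3 * X[:, 0]) * X[:, 1]            # toy target function

centers = rng.uniform(-1, 1, size=(25, 2))
theta = np.pi / 6                            # illustrative rotation angle
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
S = R.T @ np.diag([40.0, 5.0]) @ R           # rotated, direction-dependent widths
shapes = [S] * len(centers)                  # shared here; per-center in general

# Output weights by linear least squares on the basis matrix.
Phi = np.column_stack([gaussian_rbf(X, c, s) for c, s in zip(centers, shapes)])
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
print("train RMSE:", np.sqrt(np.mean((Phi @ w - y) ** 2)))
```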
|
22 |
Multi-resolution Modeling of Dynamic Signal Control on Urban Streets. Massahi, Aidin, 29 July 2017.
Dynamic signal control provides significant benefits in terms of travel time, travel time reliability, and other performance measures of transportation systems. The goal of this research is to develop and evaluate a methodology to support the planning for operations of dynamic signal control utilizing a multi-resolution analysis approach. The multi-resolution analysis modeling combines analysis, modeling, and simulation (AMS) tools to support the assessment of the impacts of dynamic traffic signal control.
Dynamic signal control strategies are effective in relieving congestion during non-typical days, such as those with high demands, incidents with different attributes, and adverse weather conditions. This research recognizes the need to model the impacts of dynamic signal control for different days representing different demand and incident levels. Methods are identified to calibrate the utilized tools for the patterns of different days, based on demand and incident conditions, utilizing combinations of real-world data with different levels of detail. A significant challenge addressed in this study is to ensure that the mesoscopic simulation-based dynamic traffic assignment (DTA) model produces turning movement volumes at signalized intersections with sufficient accuracy for the purpose of the analysis. A further important aspect when modeling incident-responsive signal control is determining the capacity impacts of incidents, considering the interaction between the drop in capacity below demand at the midblock urban street segment location and the upstream and downstream signalized intersection operations. A new model is developed to estimate the drop in capacity at the incident location by considering the downstream signal control queue spillback effects. A second model is developed to estimate the reduction in the upstream intersection capacity due to the drop in capacity at the midblock incident location, as estimated by the first model. These developed models are used as part of the mesoscopic simulation-based DTA modeling to set the capacity during incident conditions, when such modeling is used to estimate diversion during incidents. To supplement the DTA-based analysis, regression models are developed to estimate the diversion rate due to urban street incidents based on real-world data. These regression models are combined with the DTA model to estimate the volumes at the incident location and on alternative routes. The volumes for different demand and incident levels resulting from the DTA modeling are imported into a microscopic simulation model for more detailed analysis of dynamic signal control. The microscopic model shows that the implementation of special signal plans during incidents and different demand levels can improve mobility measures.
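As a toy illustration of the regression component only, the sketch below fits a diversion-rate model by ordinary least squares on made-up data. The predictors (incident duration, lanes blocked, demand level) and all numbers are hypothetical, not the study's actual variables or functional form.

```python
# Toy sketch of a diversion-rate regression (hypothetical predictors and data;
# the study's actual variables and model form are not reproduced here).
import numpy as np

# Columns: incident duration (min), lanes blocked, demand level (veh/h).
X = np.array([[15, 1,  800],
              [30, 1, 1200],
              [45, 2, 1500],
              [60, 2, 1800],
              [90, 3, 2000]], dtype=float)
diversion_rate = np.array([0.05, 0.12, 0.25, 0.33, 0.48])  # made-up observations

# Ordinary least squares with an intercept term.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, diversion_rate, rcond=None)

# A predicted rate for a new incident could then feed the DTA model as a
# boundary condition on the incident link's demand.
new_incident = np.array([1.0, 40, 2, 1400])   # intercept, duration, lanes, demand
print("predicted diversion rate:", new_incident @ coef)
```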
|
23 |
Interactive visualisation and processing of high-resolution regular 3D grids virtualised on GPU: application to biomedical data for virtual microscopy in an HPC environment. Courilleau, Nicolas, 29 August 2019.
Data visualisation is an essential aspect of scientific research in many fields. It helps to understand observed or even simulated phenomena and to extract information from them for purposes such as experimental validation or simply for project review. The focus of this thesis is on the visualisation of volume data in medical and biomedical imaging. The acquisition devices used to acquire the data generate scalar or vector fields represented in the form of regular 3D grids. The increasing accuracy of the acquisition devices implies an increasing size of the volume data, which requires adapting the visualisation algorithms in order to manage such volumes. Moreover, visualisation mostly relies on GPUs because they are well suited to such problems; however, they possess a very limited amount of memory compared to the generated volume data. The question then arises of how to dissociate the computation units, which perform the visualisation, from those of storage. Algorithms based on the so-called "out-of-core" principle are the solutions for managing large volume data sets. In this thesis, we propose a complete GPU-based pipeline allowing real-time visualisation and processing of volume data that are significantly larger than the CPU and GPU memory capacities. The interest of the pipeline comes from its GPU-based out-of-core data management, which virtualises the memory and is well suited to volume data. Moreover, this approach relies on a virtual addressing structure entirely managed and maintained on the GPU. We validate our approach with several real-time visualisation and processing applications. First, we propose an interactive virtual microscope allowing 3D auto-stereoscopic visualisation of stacks of high-resolution images. Then, we verify the adaptability of our structure to all data types with a multimodal virtual microscope. Finally, we demonstrate the multi-role capabilities of our structure through a concurrent real-time visualisation and processing application.
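As a rough, CPU-side illustration of the out-of-core principle only, the Python sketch below pages fixed-size bricks of a large volume through a small resident cache with LRU eviction. The thesis's addressing structure lives on the GPU and differs in design; names, sizes, and the in-memory stand-in for the on-disk volume are all assumptions.

```python
# Toy CPU-side sketch of out-of-core bricked-volume access (illustrative only;
# the thesis keeps its virtual addressing structure on the GPU). A plain array
# stands in for the on-disk volume; np.memmap would replace it in practice.
from collections import OrderedDict
import numpy as np

BRICK = 32           # brick edge length in voxels
CACHE_BRICKS = 8     # resident-brick budget standing in for GPU memory

rng = np.random.default_rng(0)
disk_volume = rng.integers(0, 256, size=(128, 128, 128), dtype=np.uint8)

class BrickCache:
    """Page-table-like cache: brick coordinate -> resident brick data."""
    def __init__(self, volume):
        self.volume = volume
        self.cache = OrderedDict()

    def _brick(self, bx, by, bz):
        key = (bx, by, bz)
        if key in self.cache:
            self.cache.move_to_end(key)          # mark most recently used
        else:
            if len(self.cache) >= CACHE_BRICKS:
                self.cache.popitem(last=False)   # evict least recently used
            self.cache[key] = np.array(self.volume[
                bz*BRICK:(bz+1)*BRICK,
                by*BRICK:(by+1)*BRICK,
                bx*BRICK:(bx+1)*BRICK])
        return self.cache[key]

    def voxel(self, x, y, z):
        """Translate a global voxel address into (brick, offset) and fetch."""
        brick = self._brick(x // BRICK, y // BRICK, z // BRICK)
        return brick[z % BRICK, y % BRICK, x % BRICK]

cache = BrickCache(disk_volume)
print(cache.voxel(100, 40, 7))
```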
|
24 |
Advanced Topology Optimization Techniques for Engineering and Biomedical Problems. Park, Jaejong, January 2018.
No description available.
|
25 |
Multi-scale wavelet coherence with its applications. Wu, Haibo, 11 April 2023.
The goal of this thesis is to develop a novel statistical approach to identify functional interactions between regions in a brain network. Wavelets are effective for capturing time-varying properties of non-stationary signals because they have compact support that can be compressed or stretched according to the dynamic properties of the signal. Wavelets provide a multi-scale decomposition of signals and thus can be useful for exploring potential cross-scale interactions between signals. To achieve this, we propose scale-specific sub-processes of a multivariate locally stationary wavelet stochastic process. Under this proposed framework, a novel cross-scale dependence measure is developed, which quantifies the dependence structure of components at different scales of a multivariate time series. Extensive simulation experiments are conducted to demonstrate that the theoretical properties hold in practice. The developed cross-scale analysis is performed on electroencephalogram (EEG) data to study alterations in the functional connectivity structure of children diagnosed with attention deficit hyperactivity disorder (ADHD). Our approach identified novel, interesting cross-scale interactions between channels in the brain network. The proposed framework can be extended to other signals, for example to capture the statistical association between stocks at different time scales.
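For intuition, the sketch below uses the stationary wavelet transform from PyWavelets to build a cross-scale correlation matrix between two toy signals. This only illustrates the flavour of cross-scale dependence; it is not the thesis's locally stationary wavelet estimator, and the signals are synthetic.

```python
# Rough sketch of cross-scale dependence between two signals via the
# stationary wavelet transform (PyWavelets). Illustrative only; not the
# thesis's locally stationary wavelet framework.
import numpy as np
import pywt

rng = np.random.default_rng(1)
n = 1024                                            # multiple of 2**levels, as swt requires
x = rng.standard_normal(n)
y = np.roll(x, 4) + 0.5 * rng.standard_normal(n)    # y echoes x with a small lag

levels = 5
# pywt.swt returns a list of (approx, detail) pairs; keep detail coefficients.
dx = [d for _, d in pywt.swt(x, 'db4', level=levels)]
dy = [d for _, d in pywt.swt(y, 'db4', level=levels)]

# Cross-scale correlation matrix: entry (i, j) relates scale i of x to
# scale j of y; off-diagonal entries capture cross-scale interactions.
C = np.array([[np.corrcoef(dxi, dyj)[0, 1] for dyj in dy] for dxi in dx])
print(np.round(C, 2))
```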
|
26 |
Automatic Content-Based Temporal Alignment of Image Sequences with Varying Spatio-Temporal Resolution. Ogden, Samuel R., 05 September 2012.
Many applications use multiple cameras to simultaneously capture imagery of a scene from different vantage points on a rigid, moving camera system over time. Multiple cameras often provide unique viewing angles but also additional levels of detail of a scene at different spatio-temporal resolutions. However, in order to benefit from this added information the sources must be temporally aligned. As a result of cost and physical limitations it is often impractical to synchronize these sources via an external clock device. Most methods attempt synchronization through the recovery of a constant scale factor and offset with respect to time. This limits the generality of such alignment solutions. We present an unsupervised method that utilizes a content-based clustering mechanism in order to temporally align multiple non-synchronized image sequences of different and varying spatio-temporal resolutions. We show that the use of temporal constraints and dynamic programming adds robustness to changes in capture rates, field of view, and resolution.
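For illustration, plain dynamic time warping (DTW) over per-frame descriptors shows the kind of dynamic-programming recurrence involved. The sketch below is generic DTW with synthetic descriptors, not the paper's clustering-based method.

```python
# Compact sketch of dynamic-programming alignment of two image sequences via
# per-frame descriptors (plain DTW; the paper layers content-based clustering
# and temporal constraints on a recurrence of this kind).
import numpy as np

def dtw_align(desc_a, desc_b):
    """Return the DTW cost matrix for two sequences of frame descriptors."""
    n, m = len(desc_a), len(desc_b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(desc_a[i - 1] - desc_b[j - 1])
            D[i, j] = cost + min(D[i - 1, j],      # frame of A unmatched
                                 D[i, j - 1],      # frame of B unmatched
                                 D[i - 1, j - 1])  # frames aligned
    return D[1:, 1:]

# Toy descriptors: sequence B samples the same scene signal at half the rate.
t = np.linspace(0, 1, 60)
desc_a = np.column_stack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
desc_b = desc_a[::2]
D = dtw_align(desc_a, desc_b)
print("alignment cost:", D[-1, -1])
```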
|
27 |
Development of an MRM Federation System Using COTS Simulations. Kim, Jaeho, 01 January 2018.
The goal of this research is to build an experimental environment for the Simulation Interoperability Laboratory (SIL) of the University of Central Florida (UCF). The SIL is researching multi-resolution modeling (MRM), with a focus on military applications. This thesis proposes steps for developing an MRM federation system and builds two different MRM systems using COTS simulations (SIMBox, VR-Forces, and MASA Sword). The report is written to provide the basis for a time-based MRM federation study in the Simulation Interoperability Laboratory. It describes definitions and notions related to multi-resolution modeling and discusses examples to provide a better understanding for further research. MRM is a relatively new research area, and there is high demand for integrating simulators used for military purposes. Most military-related research is based on simulators currently being used in the military; this poses a problem because the data is classified, leaving outside researchers with limited visibility into the military's process for building an MRM system or the results of such research. Therefore, developing an MRM federation using COTS simulations can provide many examples of MRM issues for future research.
|
28 |
Study of Dynamic Component Substitutions. Rao, Dhananjai M., 02 September 2003.
No description available.
|
29 |
Bivariate wavelet construction based on solutions of algebraic polynomial identities. Van der Bijl, Rinske, 2012.
Thesis (PhD), Stellenbosch University, 2012. Multi-resolution analysis (MRA) has become a very popular field of mathematical study in the past two decades, being not only an area rich in applications but one that remains filled with open problems. Building on the foundation of refinability of functions, MRA seeks to filter through levels of ever-increasing detail components in data sets, a concept enticing to an age where the development of digital equipment (to name but one example) needs to capture more and more information and then store this information in different levels of detail. Besides the design of digital objects such as animation movies, one of the most recent popular research areas in which MRA is applied is inpainting, where "lost" data (for example, in a photograph) is repaired by using boundary values of the data set and "smudging" these values into the empty entries. Two main branches of application in MRA are subdivision and wavelet analysis. The former uses refinable functions to develop algorithms with which digital curves are created from a finite set of initial points as input, the resulting curves (or drawings) of which possess certain levels of smoothness (or, mathematically speaking, continuous derivatives). Wavelets, on the other hand, yield filters with which certain levels of detail components (or noise) can be edited out of a data set. One of the greatest advantages when using wavelets is that the detail data is never lost, and the user can re-insert it into the original data set by merely applying the wavelet algorithm in reverse. This opens up a wonderful application for wavelets, namely that an existing data set can be edited by inserting detail components into it that were never there, by also using such a wavelet algorithm. In the recent book by Chui and De Villiers (see [2]), algorithms for both subdivision and wavelet applications were developed without using Fourier analysis as foundation, as had been done by researchers in earlier years, which had left such algorithms inaccessible to end users such as computer programmers. The fundamental result of Chapter 9 on wavelets of [2] was that feasibility of wavelet decomposition is equivalent to the solvability of a certain set of identities consisting of Laurent polynomials, referred to as Bezout identities, and it was shown how such a system of identities can be solved in a systematic way. The work in [2] was done in the univariate case only, and it is the purpose of this thesis to develop similar results in the bivariate case, where such a generalization is entirely non-trivial. After introducing MRA in Chapter 1, as well as discussing the refinability of functions and introducing box splines as prototype examples of functions that are refinable in the bivariate setting, our fundamental result will also be that wavelet decomposition is equivalent to solving a set of Bezout identities; this is shown rigorously in Chapter 2. In Chapter 3, we give a set of Laurent polynomials of shortest possible length satisfying the system of Bezout identities in Chapter 2, for the particular case of the Courant hat function, which will have been introduced as a linear box spline in Chapter 1. In Chapter 4, we investigate an application of our result in Chapter 3 to bivariate interpolatory subdivision. With a view to establishing a general class of wavelets corresponding to the Courant hat function, we proceed in the subsequent Chapters 5 to 8 to develop a general theory for solving the Bezout identities of Chapter 2 separately, before suggesting strategies for reconciling these solution classes in order to obtain a simultaneous solution of the system.
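For orientation, the standard objects involved can be sketched in generic notation (assumed here, not copied from the thesis): a bivariate refinement relation, its Laurent-polynomial mask symbol, and a Bezout-type identity of the general shape whose solvability governs wavelet decomposition.

```latex
% Generic setup (notation assumed; not the thesis's exact identities).
% A bivariate refinable function $\phi$ satisfies a two-scale relation
\phi(x) \;=\; \sum_{j \in \mathbb{Z}^2} p_j \,\phi(2x - j), \qquad x \in \mathbb{R}^2,
% with mask symbol the bivariate Laurent polynomial
P(z) \;=\; \sum_{j \in \mathbb{Z}^2} p_j\, z^j, \qquad z = (z_1, z_2).
% Feasibility of wavelet decomposition then reduces to solving, in Laurent
% polynomials $A$ and $B_k$, Bezout-type identities of the general shape
A(z)\,P(z) \;+\; \sum_{k} B_k(z)\, Q_k(z) \;=\; 1.
```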
|
30 |
Multi-objective ROC learning for classification. Clark, Andrew Robert James, January 2011.
Receiver operating characteristic (ROC) curves are widely used for evaluating classifier performance, having been applied to e.g. signal detection, medical diagnostics and safety critical systems. They allow examination of the trade-offs between true and false positive rates as misclassification costs are varied. Examination of the resulting graphs and calculation of the area under the ROC curve (AUC) allows assessment of how well a classifier is able to separate two classes and allows selection of an operating point with full knowledge of the available trade-offs.

In this thesis a multi-objective evolutionary algorithm (MOEA) is used to find classifiers whose ROC graph locations are Pareto optimal. The Relevance Vector Machine (RVM) is a state-of-the-art classifier that produces sparse Bayesian models, but is unfortunately prone to overfitting. Using the MOEA, hyper-parameters for RVM classifiers are set, optimising them not only in terms of true and false positive rates but also a novel measure of RVM complexity, thus encouraging sparseness, and producing approximations to the Pareto front. Several methods for regularising the RVM during the MOEA training process are examined and their performance evaluated on a number of benchmark datasets, demonstrating they possess the capability to avoid overfitting whilst producing performance equivalent to that of the maximum likelihood trained RVM.

A common task in bioinformatics is to identify genes associated with various genetic conditions by finding those genes useful for classifying a condition against a baseline. Typically, datasets contain large numbers of gene expressions measured in relatively few subjects. As a result of the high dimensionality and sparsity of examples, it can be very easy to find classifiers with near perfect training accuracies but which have poor generalisation capability. Additionally, depending on the condition and treatment involved, evaluation over a range of costs will often be desirable. An MOEA is used to identify genes for classification by simultaneously maximising the area under the ROC curve whilst minimising model complexity. This method is illustrated on a number of well-studied datasets and applied to a recent bioinformatics database resulting from the current InChianti population study.

Many classifiers produce "hard", non-probabilistic classifications and are trained to find a single set of parameters, whose values are inevitably uncertain due to limited available training data. In a Bayesian framework it is possible to ameliorate the effects of this parameter uncertainty by averaging over classifiers weighted by their posterior probability. Unfortunately, the required posterior probability is not readily computed for hard classifiers. In this thesis an Approximate Bayesian Computation Markov Chain Monte Carlo algorithm is used to sample model parameters for a hard classifier using the AUC as a measure of performance. The ability to produce ROC curves close to the Bayes optimal ROC curve is demonstrated on a synthetic dataset. Due to the large numbers of sampled parametrisations, averaging over them when rapid classification is needed may be impractical and thus methods for producing sparse weightings are investigated.
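For a concrete feel of the two central quantities, the sketch below computes the AUC from classifier scores via the rank-sum identity and filters (FPR, TPR) operating points to a Pareto-optimal subset. This is a minimal stand-in on synthetic data, not the thesis's MOEA over RVM hyper-parameters.

```python
# Minimal sketch: AUC via the rank-sum (Mann-Whitney) identity, and the
# Pareto-optimal subset of (FPR, TPR) points. Synthetic data; a stand-in for
# intuition, not the thesis's evolutionary optimisation of RVM classifiers.
import numpy as np

def auc(scores, labels):
    """AUC = (sum of positive ranks - n_pos(n_pos+1)/2) / (n_pos * n_neg)."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def pareto_points(fpr, tpr):
    """Indices of points not dominated (lower FPR and higher TPR) by another."""
    keep = []
    for i in range(len(fpr)):
        dominated = any(fpr[j] <= fpr[i] and tpr[j] >= tpr[i]
                        and (fpr[j] < fpr[i] or tpr[j] > tpr[i])
                        for j in range(len(fpr)) if j != i)
        if not dominated:
            keep.append(i)
    return keep

rng = np.random.default_rng(2)
labels = rng.integers(0, 2, size=200)
scores = labels + rng.normal(0, 0.8, size=200)   # noisy but informative scores
print("AUC:", round(auc(scores, labels), 3))

fpr, tpr = rng.uniform(size=30), rng.uniform(size=30)  # mock operating points
print("Pareto-optimal points:", pareto_points(fpr, tpr))
```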
|