11 |
The role of attention in conscious access mechanisms and their influence on visual representation: evidence from psychophysics and fMRI. Thibault, Louis, 28 November 2016
A major finding in the scientific study of conscious perception has been the existence of two temporally distinct phases of visual processing. The first, characterized by the feed-forward propagation of evoked activity in early visual cortex, is not typically associated with conscious perception. The second phase involves a reactivation of early sensory cortex by downstream regions and is often cited as a correlate, if not a proximal cause, of consciousness. This raises two crucial questions: first, what causes this feedback process to emerge, and second, what distinguishes a stimulus representation that has undergone such feedback processing from one that has not? At the time of writing, two competing theories have been proposed. The first, hereafter referred to as "early-and-local", posits that conscious access arises with the very emergence of a feedback loop between high-level sensory cortex and its primary counterpart, and that this cortical resonance is driven entirely by upstream activations along the feed-forward chain.
This implies that only those stimuli that exhibit high salience from onset can become conscious and, by extension, that a stimulus's reportability is governed entirely by early evoked activity in primary sensory cortex. The "late-and-global" theory, by contrast, posits that conscious perception is the direct result of routing information through a distributed cortico-cortical network called the Global Neuronal Workspace (hereafter GNW). By this account, visual information in various local cortical regions is given access to this routing infrastructure by a selective process, namely attention. In 2013, Sergent and colleagues tested a prediction derived from this second model: that a sensory representation that has initially failed to become conscious can nevertheless be admitted to the GNW by means of an attentional manipulation. To test this, a target Gabor patch was presented at perceptual threshold, followed by an extrinsic cue either at the location where the Gabor had appeared or on the opposite side of the screen. Subjects were better at discriminating the orientation of the Gabor in trials where the cue had been presented on the same side as the target, and also reported seeing the target more clearly, suggesting that the retrospective intervention of attention enabled a weak signal to gain access to the global neuronal workspace. We present data from psychophysical modeling and functional magnetic resonance imaging that point to a causal role for attention in the emergence of a conscious percept, with implications for the structure of perceptual representations in early sensory cortex.
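The cueing benefit described above is naturally quantified in signal-detection terms. The sketch below, a hedged illustration with made-up hit and false-alarm rates rather than the study's actual data, shows how discrimination sensitivity (d') would be compared between validly and invalidly cued trials:

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Signal-detection sensitivity: z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical rates: cue at the target's prior location (valid)
# vs. on the opposite side (invalid)
valid = d_prime(hit_rate=0.80, fa_rate=0.20)
invalid = d_prime(hit_rate=0.60, fa_rate=0.30)
print(valid, invalid)  # a cueing benefit shows up as valid > invalid
```

A retrospective-attention account predicts higher d' on validly cued trials, which is the pattern the behavioral data showed.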
|
12 |
Effects of Hydraulic Dredging and Vessel Operation on Atlantic Sturgeon Behavior in a Large Coastal River. Barber, Michael R., 01 January 2017
The tidal James River, a focus of VCU's Atlantic Sturgeon program, supports both commercial shipping and hydraulic dredging. These anthropogenic activities are documented but preventable sources of mortality for the endangered species. Using three separate VEMCO Positioning System (VPS) receiver arrays, spatial data on previously tagged fish were collected. ArcGIS and Programita software were used to analyze fish spatial distributions in the presence and absence of potential threats, drawing on additional data including automatic identification system (AIS) vessel locations, vessel passages compiled from camera footage, and dredge records provided by the US Army Corps of Engineers. The data showed a change in distribution associated with vessels that varied according to river width but not vessel type. Dredging was also associated with differences in spatial distribution, more clearly for adults than for sub-adults. The responses of Atlantic Sturgeon provide information needed to propose potential threat mitigations, including seasonal restrictions on both vessels and dredging.
|
13 |
Evaluation of Body Position Measurement and Analysis using Kinect: at the example of golf swings. Elm, Andreas, January 2014
Modern motion-capture technologies can collect quantitative, biomechanical data on golf swings that can help improve our understanding of golf theory and facilitate the establishment of new, optimized swing paradigms. This study explored the possibility of using Microsoft's Kinect sensor to analyse the biomechanics of golf swings. Following design-science research principles, it presents a software prototype capable of capturing, recording, analysing and comparing movement patterns using three-dimensional vector angles. The tracking accuracy and data validity of the software were then evaluated in a set of experiments under optimal and real-world conditions using actual golf swing recordings. The results indicate that the software provides accurate data on joint vector angles when a clear profile view is available, while visually occluded and frontal angles are more difficult to determine precisely. The employed position detection algorithm demonstrated good results in both optimal and real-world environments. Overall, the presented software and its approach to position analysis and detection show great potential for use in further research. / Program: Magisterutbildning i informatik
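As an illustration of the three-dimensional vector-angle analysis described above, an angle at a joint can be computed from three tracked 3-D joint positions. The function name and the sample coordinates below are illustrative, not taken from the prototype:

```python
import math

def joint_angle(a, b, c):
    """Angle at joint b, in degrees, formed by 3-D points a-b-c
    (e.g. shoulder-elbow-wrist from a Kinect skeleton)."""
    u = [a[i] - b[i] for i in range(3)]          # vector b -> a
    v = [c[i] - b[i] for i in range(3)]          # vector b -> c
    dot = sum(ui * vi for ui, vi in zip(u, v))
    nu = math.sqrt(sum(ui * ui for ui in u))
    nv = math.sqrt(sum(vi * vi for vi in v))
    return math.degrees(math.acos(dot / (nu * nv)))

# A right-angled elbow: shoulder above, wrist to the side
print(joint_angle((0, 1, 0), (0, 0, 0), (1, 0, 0)))  # 90.0
```

Comparing such angles frame by frame against a reference recording is one simple way to score how closely two swings match.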
|
14 |
The role of cortical oscillations in the control and protection of visual working memory. Myers, Nicholas, January 2015
Visual working memory (WM) is the ability to hold information in mind for a short time before acting on it. The capacity of WM is strikingly limited. To make the most of this precious resource, humans exhibit a high degree of cognitive flexibility: We can prioritize information that is relevant to behavior, and inhibit unnecessary distractions. This thesis examines some behavioral and neural correlates of flexibility in WM. When information is of particular importance, anticipatory attention can be directed to where it will likely appear. Oscillations in visual cortex, in the 10-Hz range, play an important role in regulating excitability of such prioritized locations. Chapter 4 describes how even spontaneous fluctuations in 10-Hz synchronization (measured by electroencephalography, EEG) before encoding influence WM. Chapters 2 and 3 describe how attention can be directed retrospectively to items even if they are already stored in WM. Chapter 3 discusses how retrospective cues change neural synchronization similarly to anticipatory cues. Behavioral and neural measures additionally indicate that the boosting of an item through retrospective cues does not require prolonged deployment of attention: rather, it may be a transient process. The second half of this thesis additionally examines how items are represented in visual WM. Chapter 5 summarizes a study using pattern analysis of magnetoencephalographic (MEG) and EEG data to decode features of visual templates stored in WM. Decoding appears transiently around the time when potential target stimuli are expected, in line with a flexible reactivation mechanism. Chapter 6 further examines separate cortical networks involved in protecting vs. updating items in WM, and tests whether task relevance changes how well WM contents can be decoded. Finally, Chapter 7 summarizes the thesis and discusses how attentional flexibility can merge WM with a wider range of sources of behavioral control.
|
15 |
Voting-Based Consensus of Data Partitions. Ayad, Hanan, 08 1900
Over the past few years, there has been a renewed interest in the consensus
problem for ensembles of partitions. Recent work is primarily motivated by the
developments in the area of combining multiple supervised learners. Unlike the
consensus of supervised classifications, the consensus of data partitions is a
challenging problem due to the lack of globally defined cluster labels and to
the inherent difficulty of data clustering as an unsupervised learning problem.
Moreover, the true number of clusters may be unknown. A fundamental goal of
consensus methods for partitions is to obtain an optimal summary of an ensemble
and to discover a cluster structure with accuracy and robustness exceeding those
of the individual ensemble partitions.
The quality of the consensus partitions highly depends on the ensemble
generation mechanism and on the suitability of the consensus method for
combining the generated ensemble. Typically, consensus methods derive an
ensemble representation that is used as the basis for extracting the consensus
partition. Most ensemble representations circumvent the labeling problem. On
the other hand, voting-based methods establish direct parallels with consensus
methods for supervised classifications, by seeking an optimal relabeling of the
ensemble partitions and deriving an ensemble representation consisting of a
central aggregated partition. An important element of the voting-based
aggregation problem is the pairwise relabeling of an ensemble partition with
respect to a representative partition of the ensemble, which is referred to here
as the voting problem. The voting problem is commonly formulated as a weighted
bipartite matching problem.
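The voting problem described above can be sketched as follows. This toy version solves the bipartite matching by brute force over label permutations, which is practical only for a small number of clusters; a real implementation would use the Hungarian algorithm. Names and data are illustrative:

```python
from itertools import permutations

def relabel(partition, reference, k):
    """Relabel `partition` (a list of cluster labels 0..k-1) to best match
    `reference`, maximizing label co-occurrence: a brute-force solution of
    the weighted bipartite matching underlying the voting problem."""
    # w[i][j] = number of objects labeled i in `partition` and j in `reference`
    w = [[0] * k for _ in range(k)]
    for p, r in zip(partition, reference):
        w[p][r] += 1
    best = max(permutations(range(k)),
               key=lambda perm: sum(w[i][perm[i]] for i in range(k)))
    return [best[p] for p in partition]

ref = [0, 0, 1, 1, 2, 2]
part = [2, 2, 0, 0, 1, 1]        # the same clustering under permuted labels
print(relabel(part, ref, 3))     # recovers the reference labeling
```

After this relabeling, votes from different ensemble partitions refer to a common set of cluster labels and can be aggregated.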
In this dissertation, a general theoretical framework for the voting problem as
a multi-response regression problem is proposed. The problem is formulated as
seeking to estimate the uncertainties associated with the assignments of the
objects to the representative clusters, given their assignments to the clusters
of an ensemble partition. A new voting scheme, referred to as cumulative voting,
is derived as a special instance of the proposed regression formulation
corresponding to fitting a linear model by least squares estimation. The
proposed formulation reveals the close relationships between the underlying loss
functions of the cumulative voting and bipartite matching schemes. A useful
feature of the proposed framework is that it can be applied to model substantial
variability between partitions, such as a variable number of clusters.
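The regression view can be illustrated with a toy accumulation step: once each ensemble partition has been relabeled against the reference, averaging the hard assignments yields soft cluster memberships, the simplest instance of the least-squares estimate. This is a sketch of the idea, not the dissertation's exact formulation:

```python
def cumulative_vote(ensemble, k):
    """Average relabeled hard assignments into soft cluster memberships.
    `ensemble` is a list of partitions (lists of labels in 0..k-1),
    all already relabeled to a common reference."""
    n = len(ensemble[0])
    u = [[0.0] * k for _ in range(n)]
    for partition in ensemble:
        for obj, label in enumerate(partition):
            u[obj][label] += 1.0 / len(ensemble)
    return u  # u[obj][c] estimates the probability that obj lies in cluster c

# Two relabeled partitions that disagree on the middle object
print(cumulative_vote([[0, 0, 1], [0, 1, 1]], k=2))
# [[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]]
```

The resulting matrix is a distributional representation of the ensemble, from which a final consensus partition can then be extracted.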
A general aggregation algorithm with variants corresponding to
cumulative voting and bipartite matching is applied and a simulation-based
analysis is presented to compare the suitability of each scheme to different
ensemble generation mechanisms. Bipartite matching is found to be more
suitable than cumulative voting for a particular generation model, whereby each
ensemble partition is generated as a noisy permutation of an underlying
labeling, according to a probability of error. For ensembles with a variable
number of clusters, it is proposed that the aggregated partition be viewed as an
estimated distributional representation of the ensemble, on the basis of which,
a criterion may be defined to seek an optimally compressed consensus partition.
The properties and features of the proposed cumulative voting scheme are
studied. In particular, the relationship between cumulative voting and the
well-known co-association matrix is highlighted. Furthermore, an adaptive
aggregation algorithm that is suited for the cumulative voting scheme is
proposed. The algorithm aims at selecting the initial reference partition and
the aggregation sequence of the ensemble partitions such that the loss of mutual
information associated with the aggregated partition is minimized. In order to
subsequently extract the final consensus partition, an efficient agglomerative
algorithm is developed. The algorithm merges the aggregated clusters such that
the maximum amount of information is preserved. Furthermore, it allows the
optimal number of consensus clusters to be estimated.
An empirical study using several artificial and real-world datasets demonstrates
that the proposed cumulative voting scheme leads to discovering substantially
more accurate consensus partitions compared to bipartite matching, in the case
of ensembles with a relatively large or a variable number of clusters. Compared
to other recent consensus methods, the proposed method is found to be comparable
with or better than the best performing methods. Moreover, accurate estimates of
the true number of clusters are often achieved using cumulative voting, whereas
consistently poor estimates are achieved based on bipartite matching. The
empirical evidence demonstrates that the bipartite matching scheme is not
suitable for these types of ensembles.
|
17 |
Retinal Image Analysis and its Use in Medical Applications. Zhang, Yibo (Bob), 19 April 2011
The retina, located at the back of the eye, is not only a vital part of human sight but also contains valuable information that can be used in biometric security applications or in the diagnosis of certain diseases. To analyze this information from retinal images, features such as blood vessels, microaneurysms and the optic disc must be extracted and detected.
We propose a vessel extraction method called MF-FDOG, which combines two filters: a Matched Filter (MF) and the first-order derivative of Gaussian (FDOG). The vessel map is extracted by thresholding the response of the MF, with the threshold adaptively adjusted by the local mean response of the FDOG. This allows us to better distinguish vessel objects from non-vessel objects.
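The adaptive-threshold idea can be sketched as below. This is a simplified illustration of an MF-FDOG-style pipeline, not the authors' exact filters; the function name, kernel choices and parameters are all assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def mf_fdog_vessel_map(image, sigma=1.5, c=2.0, w=7):
    """Toy MF-FDOG-style vessel map: threshold a matched-filter-like
    response, raising the threshold where the local mean of the
    first-order derivative-of-Gaussian response is high."""
    # Matched-filter-like response: smoothed, inverted intensity
    # (vessels are darker than the background in fundus images)
    h = gaussian_filter(-image.astype(float), sigma)
    # FDOG response magnitude (derivative along one axis), locally averaged
    d = np.abs(gaussian_filter(image.astype(float), sigma, order=[0, 1]))
    dm = uniform_filter(d, size=w)
    # Adaptive threshold: global baseline plus a local FDOG term
    thr = h.mean() + c * dm
    return h > thr  # boolean vessel map
```

The point of the FDOG term is that strong edges (e.g. bright lesions) raise the local threshold, so they are less likely to be mistaken for vessels than under a single global threshold.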
Microaneurysm (MA) detection is accomplished with two proposed algorithms: Multi-scale Correlation Filtering (MSCF) and Dictionary Learning (DL) with a Sparse Representation Classifier (SRC). MSCF is hierarchical, consisting of two levels: coarse-level microaneurysm candidate detection and fine-level true microaneurysm detection. The first level finds all possible microaneurysm candidates, while the second level extracts features from each candidate and compares them against a discrimination table to decide whether it is an MA or not. In Dictionary Learning with the Sparse Representation Classifier, MA and non-MA objects are extracted from images and used to learn two dictionaries, MA and non-MA. The Sparse Representation Classifier is then applied to each previously detected MA candidate, using the two dictionaries to determine class membership. The detection result is further improved by adding a class discrimination term to the Dictionary Learning model; this extended approach is known as Centralized Dictionary Learning (CDL) with the Sparse Representation Classifier.
The optic disc (OD) is an important anatomical feature in retinal images, and its detection is vital for developing automated screening programs. Currently, no algorithm is designed to automatically detect the OD in fundus images captured from Asians, whose optic discs are larger and whose vessels are thicker than those of Caucasians. We propose such a method to complement current algorithms, using two steps: OD vessel candidate detection and OD vessel candidate matching.
The proposed extraction and detection approaches are tested in medical applications, specifically a case study on detecting diabetic retinopathy (DR). DR is a complication of diabetes that damages the retina and can lead to blindness; it progresses through four stages and is a leading cause of sight loss in industrialized nations. Using MF-FDOG, blood vessels were extracted from DR images, while MSCF and both Dictionary Learning and Centralized Dictionary Learning with the Sparse Representation Classifier produced good microaneurysm detection results on the same images. Using a new database consisting solely of Asian DR patients, we successfully tested our OD detection method. As future work we intend to improve existing methods, for example by enhancing low-contrast microaneurysms and improving scale selection. In addition, we will extract other features from the retina, develop a generalized OD detection method, apply Dictionary Learning with the Sparse Representation Classifier to vessel extraction, and use the new image database to carry out further experiments in medical applications.
|
18 |
Electromagnetic Induction for Improved Target Location and Segregation Using Spatial Point Pattern Analysis with Applications to Historic Battlegrounds and UXO Remediation. Pierce, Carl J., 2010 August 1900
This dissertation studies the remediation of unexploded ordnance (UXO) and the prioritization of excavation procedures for archaeological artifacts using electromagnetic (EM) induction. Lowering the false alarm rate that triggers excavation, and prioritizing artifact excavations, can reduce the costs associated with unnecessary procedures.
Data were collected over five areas at the San Jacinto Battleground near Houston, Texas, using an EM-63 metal detection instrument. The areas were selected by applying the archaeological concepts of cultural and natural formation processes to areas thought to have been involved in the 1836 Battle of San Jacinto.
Statistical point pattern analysis (PPA) is employed in an innovative way to identify clustering of EM anomalies. The K-function uses point {x, y} data to look for possible clusters in relation to other points in the data set. Clusters identified with the K-function are then further examined for classification and prioritization using the weighted K-function, which incorporates a third variable, such as millivolt values or time decay, to aid in segregating and prioritizing the anomalies present.
Once the anomalies of interest are identified, their locations are determined using the Gi*-statistic technique. The Gi*-statistic uses the individual Cartesian {x, y} points as origin locations to establish a range of distances to other cluster points in the data set. The segregation and location of anomalies supplied by this analysis have several benefits: prioritization narrows down which areas should be excavated first, and anomalies of interest can be located to guide excavation procedures within the surveyed areas.
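The clustering test behind this analysis can be sketched with a naive Ripley's K estimate. This toy version ignores edge corrections and uses made-up points; it only illustrates the comparison against the complete-spatial-randomness benchmark, where K(t) is approximately pi * t^2:

```python
import math

def ripley_k(points, t, area):
    """Naive Ripley's K estimate: mean number of neighbours within
    distance t of a point, scaled by intensity (edge effects ignored)."""
    n = len(points)
    lam = n / area                      # estimated point intensity
    pairs = 0
    for i, (xi, yi) in enumerate(points):
        for j, (xj, yj) in enumerate(points):
            if i != j and math.hypot(xi - xj, yi - yj) <= t:
                pairs += 1
    return pairs / (lam * n)

# A tight cluster of hypothetical EM anomaly positions in a unit area
cluster = [(0, 0), (0.1, 0), (0, 0.1), (0.1, 0.1)]
k = ripley_k(cluster, t=0.2, area=1.0)
print(k, math.pi * 0.2 ** 2)  # clustering: K(t) well above pi * t^2
```

A weighted variant would multiply each neighbour's contribution by a mark such as its millivolt amplitude or time-decay value, which is the role the weighted K-function plays above.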
Knowing which anomalies are more important than others will help to lower false alarm rates for UXO remediation and for archaeological artifact selection, and knowing significant anomaly locations will reduce the number of excavations, saving time and money. The procedures and analyses presented here are an interdisciplinary compilation of geophysics, archaeology and statistical analysis, brought together for the first time to examine problems associated with UXO remediation and with archaeological artifact selection at historic battlegrounds using electromagnetic data.
|
19 |
An Efficient Bitmap-Based Approach to Mining Sequential Patterns for Large Databases. Wu, Chien-Hui, 29 July 2004
The task of data mining is to find useful information within very large sets of data. One important research area in data mining is mining sequential patterns. In a transaction database, a sequential pattern captures relations between the items bought by customers over a period of time. If we can find these relations by mining sequential patterns, we can devise better selling strategies to attract more customers. However, because the transaction database contains a large amount of data and is scanned repeatedly during the mining process, improving runtime efficiency is an important topic. The GSP algorithm proposed by Srikant and Agrawal uses a complex data structure to store and generate candidates. The generated candidates satisfy the property that all subsets of a frequent itemset are also frequent. This property leads to fewer candidates; however, counting candidates still takes too much time. The SPAM algorithm proposed by Ayres et al. uses bitwise operations to reduce the time spent counting candidates, but it generates many candidates that can never become frequent itemsets, which decreases efficiency. In this thesis, we propose a new bitmap-based algorithm. By modifying the candidate generation of the GSP algorithm and applying the bitwise operations of the SPAM algorithm, the proposed algorithm mines sequential patterns efficiently: we use a candidate generation method similar to GSP's to reduce the number of candidates, and a counting method similar to SPAM's to reduce the time spent counting them. In the proposed algorithm, we classify itemsets into two cases: simultaneous occurrence (denoted AB) and sequential occurrence (denoted A->B). In the case of simultaneous occurrence, the number of candidates is C(n,k) under the exhaustive method.
To prevent generating too many candidates, we use the property that all subsets of a frequent itemset are also frequent to reduce the number of candidates from C(n,k) to C(y,k), where k <= y < n. In the case of sequential occurrence, candidates are generated using a special join operation that can combine, for example, A->B and B->C into A->B->C. We also consider two further cases: (1) combining A->B and A->C into A->BC; (2) combining A->C and B->C into AB->C. The method of counting candidates is similar to the SPAM algorithm (i.e., bitwise operations). Our simulation results, based on the same bit representation of the transaction database, show that the proposed algorithm outperforms the SPAM algorithm in processing time, since it generates fewer candidates.
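The bitwise counting idea can be sketched as follows. This is a simplified illustration, one integer bitmap per customer with one bit per time slot; SPAM's actual bitmap layout and transformed bitmaps are more elaborate:

```python
def support_seq(a_bits, b_bits):
    """Count customers whose bitmaps show item A strictly before item B.
    Each customer's purchase history is one integer bitmap: bit i is set
    if the item was bought in time slot i (bit 0 = earliest slot)."""
    count = 0
    for a, b in zip(a_bits, b_bits):
        if a == 0:
            continue
        first_a = (a & -a).bit_length() - 1   # earliest slot containing A
        if b >> (first_a + 1):                # any B in a strictly later slot
            count += 1
    return count

# Three customers, four time slots each
A = [0b0001, 0b0100, 0b0010]
B = [0b0100, 0b0001, 0b0000]
print(support_seq(A, B))  # 1: only the first customer has A before B
```

Counting a candidate such as A->B thus reduces to a handful of integer operations per customer, which is the source of SPAM's (and the proposed algorithm's) speed compared to scanning raw transactions.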
|
20 |
Machine Vision and Autonomous Integration into an Unmanned Aircraft System. Van Horne, Chris, 10 1900
ITC/USA 2013 Conference Proceedings / The Forty-Ninth Annual International Telemetering Conference and Technical Exhibition / October 21-24, 2013 / Bally's Hotel & Convention Center, Las Vegas, NV / The University of Arizona's Aerial Robotics Club (ARC) sponsors the development of an unmanned aerial vehicle (UAV) able to compete in the annual Association for Unmanned Vehicle Systems International (AUVSI) Seafarer Chapter Student Unmanned Aerial Systems competition. Modern programming frameworks are used to develop a robust distributed imagery and telemetry pipeline as a backend for a mission-operator user interface. This paper discusses the design changes made for the 2013 AUVSI competition, including the integration of low-latency first-person view, updates to the distributed task backend, and incremental, asynchronous updates to the operator's user interface for real-time data analysis.
|