21
Source-Space Analyses in MEG/EEG and Applications to Explore Spatio-temporal Neural Dynamics in Human Vision. Yang, Ying, 01 February 2017.
Human cognition involves dynamic neural activities in distributed brain areas. For studying such neural mechanisms, magnetoencephalography (MEG) and electroencephalography (EEG) are two important techniques, as they non-invasively detect neural activities with high temporal resolution. Recordings at the MEG/EEG sensors can be approximated as a linear transformation of the neural activities in the brain space (i.e., the source space). However, we have only a limited number of sensors compared with the many possible locations in the brain space; it is therefore challenging to estimate the source neural activities from the sensor recordings, because we must solve the underdetermined inverse problem of the linear transformation. Moreover, estimating source activities is typically an intermediate step, whereas the ultimate goal is to understand what information is coded and how information flows in the brain. This requires further statistical analysis of the source activities. For example, to study what information is coded in different brain regions and temporal stages, we often regress neural activities on external covariates; to study dynamic interactions between brain regions, we often quantify the statistical dependence among the activities in those regions through “connectivity” analysis. Traditionally, these analyses are done in two steps: Step 1, solve the linear inverse problem under some regularization or prior assumptions (e.g., each source location being independent); Step 2, run the regression or connectivity analysis. However, biases induced by the regularization in Step 1 cannot be corrected in Step 2 and may therefore yield inaccurate regression or connectivity results. To tackle this issue, we present novel one-step methods for regression and connectivity analysis in the source space, in which we explicitly model the dependence of source activities on the external covariates (in the regression analysis) or the cross-region dependence (in the connectivity analysis), jointly with the source-to-sensor linear transformation. In simulations, we observed better performance from our models than from commonly used two-step approaches when our model assumptions were reasonably satisfied.

Besides this methodological contribution, we also applied our methods in a real MEG/EEG experiment studying the spatio-temporal neural dynamics of the visual cortex. The human visual cortex is hypothesized to have a hierarchical organization, in which low-level regions extract low-level features such as local edges, and high-level regions extract semantic features such as object categories; however, the details of the spatio-temporal dynamics are less well understood. Here, using both the two-step and our one-step regression models in the source space, we correlated neural responses to naturalistic scene images with the low-level and high-level features extracted from a well-trained convolutional neural network. Additionally, we studied the interactions between regions along the hierarchy using the two-step and our one-step connectivity models. The results from the two-step and one-step methods were generally consistent; however, the one-step methods demonstrated some intriguing advantages in the regression analysis and slightly different patterns in the connectivity analysis.
In the consistent results, we not only observed an early-to-late shift from low-level to high-level features, which supports feedforward information flow along the hierarchy, but also some novel evidence indicating non-feedforward information flow (e.g., top-down feedback). These results can help us better understand neural computation in the visual cortex. Finally, we compared the empirical sensitivity of MEG and EEG in this experiment for detecting dependence between neural responses and visual features. Our results show that the less costly EEG was able to achieve sensitivity comparable to MEG when the number of observations was about twice that in MEG. These results can help researchers choose empirically between MEG and EEG when planning experiments with limited budgets.
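To make the two-step baseline described above concrete, here is a minimal, illustrative sketch (not the thesis code) of Step 1 and Step 2 on simulated data, assuming a known source-to-sensor matrix G and an L2-regularized, minimum-norm-style inverse; all variable names and sizes are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_sources, n_trials = 64, 500, 200

# Forward model: sensor recordings ~ G @ source activity + noise (G assumed known).
G = rng.standard_normal((n_sensors, n_sources))
x = rng.standard_normal(n_trials)                      # external covariate (e.g., an image feature)
beta_true = np.zeros(n_sources)
beta_true[:5] = 3.0                                    # only a few sources depend on the covariate
sources = np.outer(x, beta_true) + 0.5 * rng.standard_normal((n_trials, n_sources))
sensors = sources @ G.T + 0.1 * rng.standard_normal((n_trials, n_sensors))

# Step 1: L2-regularized (minimum-norm-style) inverse, applied to every trial.
lam = 1.0
inverse_op = G.T @ np.linalg.inv(G @ G.T + lam * np.eye(n_sensors))
source_est = sensors @ inverse_op.T                    # (n_trials, n_sources)

# Step 2: regress each estimated source on the covariate.
beta_hat = (x @ source_est) / (x @ x)
print("true betas     :", beta_true[:5])
print("recovered betas:", np.round(beta_hat[:5], 2))   # biased (shrunk) by the Step-1 regularization
```

The shrinkage visible in `beta_hat` is the kind of Step-1 bias that a one-step model, which estimates the covariate dependence jointly with the source-to-sensor mapping, is intended to avoid.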
22
Understanding the potentiation and malleability of population activity in response to absolute and relative stimulus dimensions within the human visual cortex. Vinke, Louis Nicholas, 28 March 2021.
The human visual system is tasked with transforming variations in light within our environment into a coherent percept, typically described using properties such as luminance and contrast. The experiments described in this dissertation examine how the human visual cortex responds to each of these stimulus properties at the population level, and explore the degree to which contrast adaptation can alter these response properties. The first set of experiments (Chapter 2) demonstrates how saturating sigmoidal contrast response functions can be captured with human fMRI by leveraging sustained contrast adaptation to reduce the heterogeneity of response profiles across neural populations. The results obtained with this methodology have the potential to reconcile the qualitatively different findings reported across visual neuroscience when electrophysiological and population-based neuroimaging measures are compared. The second set of experiments (Chapter 3) demonstrates how, under certain conditions, a well-established visuocortical response property, the contrast response, can also reflect luminance encoding, challenging the idea that luminance information plays no significant role in supporting visual perception. Specifically, these results show that the mean luminance information of a visual signal persists within visuocortical representations, even after controlling for pupillary dynamics, and potentially reflects an inherent imbalance of excitatory and inhibitory components. The final set of experiments (Chapter 4) examines how the time course of population activity during initial periods of adaptation differs across only slightly different adapter conditions. The degree to which stimulus adapter orientation bias (radial vs. concentric orientation) or stimulus adapter luminance (2409 cd/m² vs. 757.3 cd/m²) can alter adaptation time-course dynamics is examined in detail, as is the prevalence of any retinotopic bias. In an effort to coalesce the findings across all three chapters, the shape and efficacy of the initial adaptation time course is ultimately compared against the contrast and luminance response function parameters reported in the previous chapters. As a whole, the findings reported in this dissertation challenge some common assumptions about how the early human visual cortex adjusts and responds to the environment, provide methodological tools and stimulus-design caveats that vision neuroscientists will need to consider, and can inform cortical models of vision.
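As a concrete illustration of a saturating sigmoidal contrast response function of the kind referred to above, the sketch below fits a Naka-Rushton (hyperbolic-ratio) function, one common choice, to hypothetical BOLD amplitudes; the functional form, data values, and starting parameters are assumptions for illustration, not values from the dissertation:

```python
import numpy as np
from scipy.optimize import curve_fit

def naka_rushton(c, r_max, c50, n, baseline):
    """Hyperbolic-ratio contrast response function: saturates as contrast c grows."""
    return r_max * c**n / (c**n + c50**n) + baseline

# Hypothetical percent-signal-change responses at several Michelson contrasts.
contrast = np.array([0.02, 0.05, 0.10, 0.20, 0.40, 0.80])
bold = np.array([0.15, 0.35, 0.70, 1.05, 1.25, 1.35])

params, _ = curve_fit(naka_rushton, contrast, bold, p0=[1.3, 0.15, 2.0, 0.1], maxfev=10000)
r_max, c50, n, baseline = params
print(f"Rmax={r_max:.2f}, C50={c50:.3f}, n={n:.2f}, baseline={baseline:.2f}")
```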
23
The use of Silent Substitution in measuring isolated cone- and rod- Human ERGs. Kommanapalli, Deepika, January 2018.
More than a century after its discovery, the electroretinogram (ERG) remains the objective tool conventionally used to assess retinal function in health and disease. Although there is ongoing research into ERG recording techniques, interpretation and clinical applications, there is still limited understanding of how each photoreceptor class contributes to the ERG waveform, and their roles and/or susceptibilities in various retinal diseases remain unclear. Another limitation of the conventional testing protocols currently used in clinical settings is the requirement for an adaptation period, which is time consuming. Furthermore, the ERG responses derived in this manner are recorded under different stimulus conditions, making comparison of these signals difficult. To address these issues and develop a new testing method, we employed the silent substitution paradigm to obtain cone- and rod-isolating ERGs using sine- and square-wave temporal profiles. The ERGs obtained in this manner were shown to be photoreceptor-selective. Furthermore, these responses provided a functional index not only of the photoreceptors but also of their contributions to the successive postreceptoral pathways. We believe that the substitution stimuli used in this thesis could be a valuable tool in the functional assessment of individual photoreceptor classes under normal and pathological conditions. Furthermore, we speculate that this method of cone/rod activity isolation could be used to develop faster and more efficient photoreceptor-selective testing protocols without the need for adaptation. / Bradford School of Optometry and Vision Sciences scholarship
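To illustrate the linear-algebra idea behind silent substitution, here is a minimal sketch, using an entirely hypothetical primaries-by-receptors excitation matrix, of how a modulation of the display primaries can be chosen so that it is invisible to the cones while driving the rods; real stimuli would be computed from measured primary spectra and photoreceptor fundamentals:

```python
import numpy as np

# Hypothetical 4 primaries x 4 receptor classes (L, M, S cones, rods):
# S[i, j] = excitation of receptor j produced by unit modulation of primary i.
S = np.array([
    [0.90, 0.55, 0.02, 0.30],   # primary 1
    [0.60, 0.80, 0.05, 0.45],   # primary 2
    [0.10, 0.15, 0.95, 0.25],   # primary 3
    [0.40, 0.45, 0.10, 0.90],   # primary 4
])

def isolating_modulation(S, target, silenced):
    """Find a primary-modulation direction that drives `target` while producing
    zero differential excitation in every `silenced` receptor class."""
    A = S[:, silenced].T                  # constraints: silenced receptors see no change
    _, _, Vt = np.linalg.svd(A)
    null = Vt[A.shape[0]:].T              # columns span the null space of A (assumes full row rank)
    drive = S[:, target] @ null           # effect of each null-space direction on the target
    m = null @ drive                      # direction with the largest target drive
    return m / np.max(np.abs(m))          # normalize to unit peak modulation

rod_isolating = isolating_modulation(S, target=3, silenced=[0, 1, 2])
print("Rod-isolating primary modulation:", np.round(rod_isolating, 3))
print("Residual cone excitation:", np.round(S[:, :3].T @ rod_isolating, 6))
print("Rod excitation:", np.round(S[:, 3] @ rod_isolating, 3))
```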
24
Understanding Human Imagination Through Diffusion Model. Pham, Minh Nguyen, 22 December 2023.
This paper develops a possible explanation for a facet of visual processing inspired by the biological brain's mechanisms for information gathering. The primary focus is on how humans observe elements in their environment and reconstruct visual information within the brain. Drawing on insights from diverse studies, personal research, and biological evidence, the study posits that the human brain captures high-level feature information from objects rather than replicating exact visual details, as is the case in digital systems. Subsequently, the brain can either reconstruct the original object using its specific features or generate an entirely new object by combining features from different objects, a process referred to as "Imagination." Central to this process is the "Imagination Core," a dedicated unit housing a modified diffusion model. This model allows high-level features of an object to be employed for tasks like recreating the original object or forming entirely new objects from existing features. The experimental simulation, conducted with an Artificial Neural Network (ANN) incorporating a Convolutional Neural Network (CNN) for high-level feature extraction within the Information Processing Network and a Diffusion Network for generating new information in the Imagination Core, demonstrated the ability to create novel images based solely on high-level features extracted from previously learned images. This experimental outcome substantiates the theory that human learning and storage of visual information occur through high-level features, enabling us to recall events accurately, and these details are instrumental in our imaginative processes. / Master of Science / This study takes inspiration from how our brains process visual information to explore how we see and imagine things. Think of it like a digital camera, but instead of saving every tiny detail, our brains capture the main features of what we see. These features are then used to recreate images or even form entirely new ones through a process called "Imagination." It is like when you remember something from the past – your brain does not store every little detail but retains enough to help you recall events and create new ideas.
In our study, we created a special unit called the "Imagination Core," using a modified diffusion model, to simulate how this process works. We trained an Artificial Neural Network (ANN) with a Convolutional Neural Network (CNN) to extract the main features of objects and a Diffusion Network to generate new information in the Imagination Core. The exciting part? We were able to make the computer generate new images it had never seen before, only using details it learned from previous images. This supports the idea that, like our brains, focusing on important details helps us remember things and fuels our ability to imagine new things.
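For readers unfamiliar with diffusion models, the sketch below shows the generic DDPM-style forward (noising) and reverse (denoising) updates that such a generator is built on; the noise-prediction network is left as a stub, and the schedule, shapes, and conditioning on a feature vector are illustrative assumptions rather than the thesis's actual "Imagination Core" implementation:

```python
import numpy as np

# DDPM-style noise schedule (Ho et al., 2020); the thesis's modified variant may differ.
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)

def add_noise(x0, t, rng):
    """Forward process: q(x_t | x_0) = N(sqrt(a_bar_t) * x0, (1 - a_bar_t) * I)."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps, eps

def eps_model(x_t, t, features):
    """Placeholder noise predictor. In the described architecture this would be a trained
    network conditioned on high-level CNN features; here it is a stub returning zeros."""
    return np.zeros_like(x_t)

def sample(features, shape, rng):
    """Reverse process: start from noise and iteratively denoise, conditioned on features.
    With the stub predictor the output is only scaled noise; a trained network is required."""
    x = rng.standard_normal(shape)
    for t in reversed(range(T)):
        eps_hat = eps_model(x, t, features)
        coef = betas[t] / np.sqrt(1.0 - alpha_bar[t])
        x = (x - coef * eps_hat) / np.sqrt(alphas[t])
        if t > 0:
            x += np.sqrt(betas[t]) * rng.standard_normal(shape)
    return x

rng = np.random.default_rng(0)
noisy, eps = add_noise(np.ones((8, 8)), t=500, rng=rng)
generated = sample(features=np.zeros(128), shape=(8, 8), rng=rng)
```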
25
Satellite Image Processing with Biologically-inspired Computational Methods and Visual Attention. Sina, Md Ibne, 27 July 2012.
The human vision system is generally recognized as being superior to all known artificial vision systems. Visual attention, among the many processes related to human vision, is responsible for identifying relevant regions in a scene for further processing. In most cases, analyzing an entire scene is unnecessary and inevitably time consuming; hence, considering visual attention can be advantageous. The subfield of computer vision in which this functionality is computationally emulated has shown high potential for solving real-world vision problems effectively. In this monograph, elements of visual attention are explored and algorithms are proposed that exploit such elements in order to enhance image-understanding capabilities. Satellite images are given special attention due to their practical relevance, the inherent complexity of their contents, and their resolution. Processing such large images using visual attention can be very helpful, since one can first identify relevant regions and deploy further detailed analysis in those regions only. Bottom-up features, which are derived directly from the scene contents, are at the core of visual attention and help identify salient image regions. In the literature, the use of intensity, orientation and color as the dominant features for computing bottom-up attention is ubiquitous. The effects of incorporating an entropy feature on top of the above-mentioned ones are also studied. This investigation demonstrates that such integration makes visual attention more sensitive to fine details and hence has the potential to be exploited in a suitable context. One interesting application of bottom-up attention, which is also examined in this work, is image segmentation. Since low-salience regions generally correspond to homogeneously textured regions in the input image, a model can be learned from a homogeneous region and used to group similar textures in other image regions. Experimentation demonstrates that the proposed method produces realistic segmentations of satellite images. Top-down attention, on the other hand, is influenced by the observer's current state, such as knowledge, goals, and expectations. It can be exploited to locate target objects based on various features, and it increases search or recognition efficiency by concentrating on the relevant image regions only. This technique is very helpful in processing large images such as satellite images. A novel algorithm for computing top-down attention is proposed that is able to learn and quantify important bottom-up features from a set of training images and to enhance such features in a test image in order to localize objects having similar features. An object recognition technique is then deployed that extracts potential target objects from the computed top-down attention map and attempts to recognize them. An object descriptor is formed based on physical appearance, using both texture and shape information; this combination is shown to be especially useful in the object recognition phase. The proposed texture descriptor is based on Legendre moments computed on local binary patterns, while shape is described using Hu moment invariants. Several tools and techniques, such as different types of moments of functions and combinations of different measures, have been applied for experimentation. The developed algorithms are general, efficient and effective, and have the potential to be deployed for real-world problems.
A dedicated software testing platform has been designed to facilitate the manipulation of satellite images and support a modular and flexible implementation of computational methods, including various components of visual attention models.
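As an illustration of the kind of bottom-up saliency computation with an added entropy channel described above, the sketch below combines simple intensity-contrast, orientation (here crudely approximated with a Sobel gradient rather than Gabor filters), and local-entropy maps; the file name, neighborhood sizes, and equal-weight fusion are illustrative assumptions, not the algorithm developed in the thesis:

```python
import numpy as np
from skimage import io, color, filters
from skimage.filters.rank import entropy
from skimage.morphology import disk
from skimage.util import img_as_ubyte

def normalize(m):
    """Rescale a conspicuity map to [0, 1]."""
    m = m - m.min()
    return m / (m.max() + 1e-9)

# Hypothetical input file; any RGB satellite tile would do.
img = color.rgb2gray(io.imread("satellite_tile.png"))

# Intensity channel: center-surround-like contrast via a difference of Gaussians.
intensity = np.abs(filters.gaussian(img, 2) - filters.gaussian(img, 8))

# Orientation channel: gradient magnitude as a crude stand-in for oriented filters.
orientation = filters.sobel(img)

# Entropy channel: local texture "busyness" in a small neighborhood.
ent = entropy(img_as_ubyte(img), disk(5))

# Simple additive fusion into a bottom-up saliency map.
saliency = (normalize(intensity) + normalize(orientation) + normalize(ent)) / 3.0
```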
27
Mobility enhancement using simulated artificial human vision. Dowling, Jason Anthony, January 2007.
The electrical stimulation of appropriate components of the human visual system can result in the perception of blobs of light (or phosphenes) in totally blind patients. By stimulating an array of closely aligned electrodes, it is possible for a patient to perceive very low-resolution images formed from spatially aligned phosphenes. Using this approach, a number of international research groups are working toward multiple-electrode systems (called Artificial Human Vision (AHV) systems, or visual prostheses) that provide a phosphene-based substitute for normal human vision. Despite the great promise, current AHV systems have a number of constraints. These include limitations in the number of electrodes that can be implanted and in the perceived spatial layout and display frequency of phosphenes. The development of computer vision techniques that can maximise the visualisation value of the limited number of phosphenes would therefore be useful in compensating for these constraints. The lack of an objective method for comparing different AHV system displays, as well as for comparing AHV systems with other blind mobility aids (such as the long cane), has been a significant problem for AHV researchers. Finally, AHV research in Australia and many other countries relies strongly on theoretical models and animal experimentation due to the difficulty of prototype human trials. Because of this constraint, the experiments conducted in this thesis were limited to simulated AHV devices with normally sighted research participants, and the true impact on blind people can only be regarded as approximate. In light of these constraints, this thesis has two general aims. The first aim is to investigate, evaluate and develop effective techniques for mobility assessment which will allow the objective comparison of different AHV system phosphene presentation methods. The second aim is to develop a useful display framework to guide the development of AHV information presentation, and to use this framework to guide the development of an AHV simulation device. The first research contribution resulting from this work is a conceptual framework based on literature reviews of blind and low-vision mobility, AHV technology, and computer vision. This framework incorporates a comprehensive set of factors which affect the effectiveness of information presentation in an AHV system. Experiments reported in this thesis have investigated a number of these factors using simulated AHV with human participants. It has been found that higher spatial resolution is associated with more accurate walking (reduced veering), whereas a higher display rate is associated with faster walking speeds. In this way it has been demonstrated that the conceptual framework supports and guides the development of an adaptive AHV system, with dynamic adjustment of display properties in real time. The second research contribution addresses mobility assessment, which has been identified as an important issue in the AHV literature. This thesis presents the adaptation of a mobility assessment method from the blind and low-vision literature to measure simulated AHV mobility performance using real-time, computer-based analysis. This method of mobility assessment (based on parameters for walking speed, obstacle contacts and veering) is demonstrated experimentally in two different indoor mobility courses. These experiments involved sixty-five participants wearing a head-mounted simulation device.
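To make the simulated-display parameters concrete, here is a minimal sketch of how a phosphene view can be rendered from a camera frame by sampling it on a coarse grid, quantizing brightness, and drawing blurred blobs; the grid size, number of brightness levels, and blob size are illustrative assumptions, not the parameters used in the thesis experiments:

```python
import numpy as np
import cv2

def phosphene_frame(gray, grid=(16, 12), levels=4):
    """Render a crude phosphene view of an 8-bit grayscale frame: sample it on a
    coarse grid, quantize brightness, and draw each sample as a blurred blob."""
    h, w = 240, 320
    small = cv2.resize(gray, grid, interpolation=cv2.INTER_AREA)       # grid = (cols, rows)
    small = (np.round(small / 255.0 * (levels - 1)) / (levels - 1) * 255).astype(np.uint8)
    out = np.zeros((h, w), np.uint8)
    ys = np.linspace(0, h, grid[1], endpoint=False) + h / grid[1] / 2
    xs = np.linspace(0, w, grid[0], endpoint=False) + w / grid[0] / 2
    for j, y in enumerate(ys):
        for i, x in enumerate(xs):
            cv2.circle(out, (int(x), int(y)), 6, int(small[j, i]), -1)
    return cv2.GaussianBlur(out, (0, 0), 3)

# Example (hypothetical file name):
# view = phosphene_frame(cv2.cvtColor(cv2.imread("frame.png"), cv2.COLOR_BGR2GRAY))
```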
The final research contribution in this thesis is the development and evaluation of an original real-time looming obstacle detector, based on coarse optical flow and implemented on a Windows PocketPC-based Personal Digital Assistant (PDA) using a CF-card camera. PDA-based processors are a preferred main processing platform for AHV systems due to their small size, light weight and ease of software development. However, PDA devices are currently constrained by restricted random access memory, the lack of a floating-point unit and slow internal bus speeds; any real-time software therefore needs to maximise the use of integer calculations and minimise memory usage. This contribution was significant, as the resulting device yielded both a selection of experimental results and subjective user opinions.
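The following sketch shows one way a coarse optical-flow looming detector can be built with off-the-shelf tools, treating the mean divergence of the flow field as the looming signal; the frame size, camera source, and threshold are illustrative guesses, and this desktop floating-point version is not the integer-only PDA implementation described in the thesis:

```python
import cv2
import numpy as np

def looming_score(prev_gray, curr_gray):
    """Estimate 'looming' as the mean divergence of a coarse optical-flow field:
    an approaching obstacle produces an expanding flow pattern (positive divergence)."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 2, 21, 2, 5, 1.1, 0)
    du_dx = np.gradient(flow[..., 0], axis=1)
    dv_dy = np.gradient(flow[..., 1], axis=0)
    return float(np.mean(du_dx + dv_dy))

cap = cv2.VideoCapture(0)                    # any camera source; the thesis used a CF-card camera
ok, prev = cap.read()
prev = cv2.resize(cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY), (80, 60))   # coarse grid keeps it cheap
while True:
    ok, frame = cap.read()
    if not ok:
        break
    curr = cv2.resize(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (80, 60))
    if looming_score(prev, curr) > 0.05:     # threshold is an illustrative guess
        print("possible looming obstacle")
    prev = curr
```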
28
Le colliculus supérieur dans la maladie de Parkinson : un biomarqueur possible ? / The superior colliculus in Parkinson's disease: a possible biomarker? Bellot, Emmanuelle, 06 December 2017.
Certains troubles visuo-moteurs observés dès le stade précoce de la maladie de Parkinson (MP) pourraient être liés à une altération du fonctionnement d’une structure sous-corticale reliée aux ganglions de la base, le colliculus supérieur (CS). L’objectif de cette thèse a été d’explorer l’état fonctionnel du CS chez le patient parkinsonien nouvellement diagnostiqué (de novo) avant et après instauration du traitement dopaminergique, afin d’évaluer son potentiel de biomarqueur. Pour cela, un paradigme expérimental d’Imagerie par Résonance Magnétique fonctionnelle (IRMf) a été développé, permettant d’imager avec succès l’activité fonctionnelle du CS et également du corps genouillé latéral (CGL) et de l’aire visuelle primaire V1 et de moduler leur activité via l’emploi de stimulation visuelle jouant sur de très faibles niveaux de contraste (<10%). Un test de psychophysique a également été développé, permettant d’estimer la réponse perceptuelle au contraste. Nous avons dans un premier temps testé notre protocole expérimental auprès de sujets sains d’âge variable afin d’évaluer le fonctionnement de ces trois régions d’intérêt (ROIs) au cours du vieillissement normal et de différencier les effets liés à l’âge de ceux potentiellement liés à la pathologie (Etude 1). Une diminution statistiquement significative de la réponse BOLD au sein du CGL et de V1 avec l’âge a été observée, ces réponses corrélant de plus parfaitement avec les réponses perceptuelles estimées en psychophysique. Les voies magnocellulaire et parvocellulaire semblent jouer un rôle dans cette perte de sensibilité au contraste de luminance liée à l’âge. Nous avons dans un second temps testé notre protocole auprès de patients parkinsoniens de novo avant et après instauration du premier traitement dopaminergique afin d’évaluer les effets de la MP et du traitement sur le fonctionnement de nos ROIs (Etude 2). Une altération précoce du traitement du contraste a été observée au sein du CS et du CGL chez les patients parkinsoniens, non normalisée par l’instauration du traitement dopaminergique. Ces travaux de thèse ont ainsi mis en évidence un déficit fonctionnel du CS et du CGL survenant précocement durant l’évolution de la MP, confirmé par nos analyses de connectivité effective. Ces résultats pourraient favoriser l’identification de déficits liés à un dysfonctionnement sensoriel de ces structures tout comme le développement de tests paraclinique et clinique impliquant ce système pour un diagnostic plus précoce de la maladie. / Some visuo-motor impairments observed in the early stages of Parkinson’s disease (PD) might be related to a dysfunction of a subcortical structure connected to the basal ganglia, the superior colliculus (SC). The aim of this PhD thesis was to explore the functional state of the SC in newly diagnosed (de novo) PD patients before and after dopaminergic treatment intake, in order to evaluate the potential value of SC functioning as a biomarker. To do this, we developed a functional Magnetic Resonance Imaging (fMRI) experimental protocol that successfully imaged functional activity in the SC, as well as in the lateral geniculate nucleus (LGN) and primary visual area V1, and modulated their activity using visual stimuli with low luminance contrast levels (<10%). Additionally, we estimated the perceptual response to contrast using a psychophysical task.
We first tested this experimental protocol on healthy subjects of varying ages, in order to evaluate the effect of normal aging on the functioning of these three regions of interest (ROIs) and to distinguish effects related to age from those potentially related to the pathology (Study 1). A significant progressive decrease of the BOLD amplitude with age was observed in the LGN and V1, and these responses were consistent with the response functions obtained with the psychophysical task. These results indicate a significant age-related decline in luminance contrast sensitivity involving both the magnocellular and parvocellular pathways. We then tested our protocol on de novo PD patients before and after the introduction of the first dopaminergic treatment, in order to assess the effects of PD and of the treatment on the functioning of the ROIs (Study 2). Our results highlighted an early alteration of contrast processing in the SC and LGN of PD patients, which was not normalized by the introduction of dopaminergic treatment. These findings indicate a functional deficit of the SC and LGN that appears early in the disease course, in line with our effective connectivity analyses. These results could support the identification of deficits linked to sensory dysfunction of these structures, as well as the development of paraclinical and clinical tests involving this system for earlier diagnosis of the disease.
29
Satellite Image Processing with Biologically-inspired Computational Methods and Visual Attention. Sina, Md Ibne, January 2012.
The human vision system is generally recognized as being superior to all known artificial vision systems. Visual attention, among the many processes related to human vision, is responsible for identifying relevant regions in a scene for further processing. In most cases, analyzing an entire scene is unnecessary and inevitably time consuming; hence, considering visual attention can be advantageous. The subfield of computer vision in which this functionality is computationally emulated has shown high potential for solving real-world vision problems effectively. In this monograph, elements of visual attention are explored and algorithms are proposed that exploit such elements in order to enhance image-understanding capabilities. Satellite images are given special attention due to their practical relevance, the inherent complexity of their contents, and their resolution. Processing such large images using visual attention can be very helpful, since one can first identify relevant regions and deploy further detailed analysis in those regions only. Bottom-up features, which are derived directly from the scene contents, are at the core of visual attention and help identify salient image regions. In the literature, the use of intensity, orientation and color as the dominant features for computing bottom-up attention is ubiquitous. The effects of incorporating an entropy feature on top of the above-mentioned ones are also studied. This investigation demonstrates that such integration makes visual attention more sensitive to fine details and hence has the potential to be exploited in a suitable context. One interesting application of bottom-up attention, which is also examined in this work, is image segmentation. Since low-salience regions generally correspond to homogeneously textured regions in the input image, a model can be learned from a homogeneous region and used to group similar textures in other image regions. Experimentation demonstrates that the proposed method produces realistic segmentations of satellite images. Top-down attention, on the other hand, is influenced by the observer's current state, such as knowledge, goals, and expectations. It can be exploited to locate target objects based on various features, and it increases search or recognition efficiency by concentrating on the relevant image regions only. This technique is very helpful in processing large images such as satellite images. A novel algorithm for computing top-down attention is proposed that is able to learn and quantify important bottom-up features from a set of training images and to enhance such features in a test image in order to localize objects having similar features. An object recognition technique is then deployed that extracts potential target objects from the computed top-down attention map and attempts to recognize them. An object descriptor is formed based on physical appearance, using both texture and shape information; this combination is shown to be especially useful in the object recognition phase. The proposed texture descriptor is based on Legendre moments computed on local binary patterns, while shape is described using Hu moment invariants. Several tools and techniques, such as different types of moments of functions and combinations of different measures, have been applied for experimentation. The developed algorithms are general, efficient and effective, and have the potential to be deployed for real-world problems.
A dedicated software testing platform has been designed to facilitate the manipulation of satellite images and support a modular and flexible implementation of computational methods, including various components of visual attention models.
30
Modélisation surfacique et volumique de la peau : classification et analyse couleur / Skin surface and volume modeling: clustering and color analysis. Breugnot, Josselin, 27 June 2011.
Grâce aux innovations technologiques récentes, l’exploration cutanée est devenue de plus en plus facile et précise. Le relevé topographique de la surface de peau par projection de franges ainsi que l’exploration des structures intradermiques par microscopie confocale in-vivo en sont des exemples parfaits. La mise en place de ces techniques et les développements sont présentés dans cette thèse. L’apport de l’imagerie est évident tant pour le traitement des acquisitions de ces appareils que pour l’évaluation de paramètres cutanés à partir de photographie par exemple. L’extension du modèle LIP niveaux de gris à la couleur a été réalisée pour apporter une évaluation proche de celle d’un expert grâce aux fondements logarithmiques du modèle, proches de la vision humaine. Enfin, la classification de données dans une image, sujet omniprésent dans le traitement d’images, a été abordée par les classifications hiérarchiques ascendantes, utilisant un cadre mathématique rigoureux grâce aux métriques ultramétriques / Thanks to recent technological innovations, skin exploration has become increasingly easy and accurate. Topographical measurement of the skin surface by fringe projection and exploration of intradermal structures by in-vivo laser confocal microscopy are two prime examples. The implementation of these techniques and the related developments are presented in this thesis. The contribution of image processing is obvious, both for processing the acquisitions of these devices and for evaluating cutaneous parameters from photographs, for example. The grey-level LIP model has been extended to color in order to bring the evaluation close to that of an expert, thanks to the logarithmic foundations of this model, which are close to human vision. Finally, data clustering in images, a ubiquitous topic in image processing, has been addressed with ascending (agglomerative) hierarchical clustering, using a rigorous mathematical framework based on ultrametric distances.
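For readers unfamiliar with the LIP framework mentioned above, here is a minimal sketch of the classical grey-level (grey-tone) LIP operations; the conventions (upper bound M, absorption-style grey-tone values) follow the standard Jourlin-Pinoli formulation and are an illustration, not the thesis's color extension:

```python
import numpy as np

M = 256.0  # upper bound of the grey-tone scale in the LIP framework

def lip_add(f, g):
    """LIP addition of two grey-tone functions: the result stays within [0, M)."""
    return f + g - (f * g) / M

def lip_scalar_mul(alpha, f):
    """LIP multiplication of a grey-tone function by a real scalar alpha."""
    return M - M * (1.0 - f / M) ** alpha

f = np.array([10.0, 100.0, 200.0])
g = np.array([50.0, 50.0, 50.0])
print(lip_add(f, g))            # e.g. 10 (+) 50 = 58.05, bounded below M
print(lip_scalar_mul(2.0, f))   # LIP "doubling" of f
```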