391

La visualisation : une application de la psychosynthèse auprès de groupes-classes de 2e et de 4e secondaire / Visualization: an application of psychosynthesis with Secondary 2 and Secondary 4 classroom groups

St-Germain, Johanne, January 2000 (has links)
Thesis (M.Ed.)--Université du Québec à Chicoutimi. / Electronic document also available in PDF format. CaQCU
392

Agrégation spatiotemporelle pour la visualisation de traces d'exécution / Spatiotemporal Aggregation for Execution Trace Visualization

Dosimont, Damien 10 June 2015 (has links)
Les techniques de visualisation de traces sont fréquemment employées par les développeurs pour comprendre, déboguer, et optimiser leurs applications. La plupart des outils d'analyse font appel à des représentations spatiotemporelles, qui impliquent un axe du temps et une représentation des ressources, et lient la dynamique de l'application avec sa structure ou sa topologie. Toutefois, ces dernières ne répondent pas au problème de passage à l'échelle de manière satisfaisante. Face à un volume de trace de l'ordre du gigaoctet et une quantité d'évènements supérieure au million, elles s'avèrent incapables de représenter une vue d'ensemble de la trace, à cause des limitations imposées par la taille de l'écran, des performances nécessaires pour une bonne interaction, mais aussi des limites cognitives et perceptives de l'analyste qui ne peut pas faire face à une représentation trop complexe. Cette vue d'ensemble est nécessaire puisqu'elle constitue un point d'entrée à l'analyse ; elle constitue la première étape du mantra de Shneiderman - Overview first, zoom and filter, then details-on-demand -, un principe aidant à concevoir une méthode d'analyse visuelle. Face à ce constat, nous élaborons dans cette thèse deux méthodes d'analyse, l'une temporelle, l'autre spatiotemporelle, fondées sur la visualisation. Elles intègrent chacune des étapes du mantra de Shneiderman - dont la vue d'ensemble -, tout en assurant le passage à l'échelle. Ces méthodes sont fondées sur une méthode d'agrégation qui s'attache à réduire la complexité de la représentation tout en préservant le maximum d'information. Pour ce faire, nous associons à ces deux concepts des mesures issues de la théorie de l'information. Les parties du système sont agrégées de manière à satisfaire un compromis entre ces deux mesures, dont le poids de chacune est ajusté par l'analyste afin de choisir un niveau de détail. L'effet de la résolution de ce compromis est la discrimination de l'hétérogénéité du comportement des entités composant le système à analyser. Cela nous permet de détecter des anomalies dans des traces d'applications multimédia embarquées, ou d'applications de calcul parallèle s'exécutant sur une grille. Nous avons implémenté ces techniques au sein d'un logiciel, Ocelotl, dont les choix de conception assurent le passage à l'échelle pour des traces de plusieurs milliards d'évènements. Nous proposons également une interaction efficace, notamment en synchronisant notre méthode de visualisation avec des représentations plus détaillées, afin de permettre une analyse descendante jusqu'à la source des anomalies.
/ Trace visualization techniques are commonly used by developers to understand, debug, and optimize their applications. Most analysis tools rely on spatiotemporal representations, composed of a timeline and the resources involved in the application execution, which link the dynamics of the application to its structure or topology. However, these representations suffer from scalability issues and cannot provide an overview for the analysis of huge traces spanning several gigabytes and containing over a million events. This is caused by screen size constraints, by the performance required for efficient interaction, and by the analyst's perceptual and cognitive limitations. Yet overviews are necessary to provide an entry point to the analysis, as recommended by Shneiderman's mantra - Overview first, zoom and filter, then details-on-demand -, a guideline that helps design a visual analysis method. To face this situation, this thesis elaborates two scalable, visualization-based analysis methods, one temporal and one spatiotemporal. They integrate all the steps of Shneiderman's mantra, in particular by providing the analyst with a synthetic overview of the trace. Both are built on an aggregation method that reduces the representation complexity while keeping the maximum amount of information; both quantities are expressed with measures from information theory. The parts of the system to aggregate are determined by satisfying a trade-off between these measures, whose respective weights are adjusted by the user to choose a level of detail. Solving this trade-off reveals the behavioral heterogeneity of the entities that compose the analyzed system, which helps to find anomalies in embedded multimedia applications and in parallel applications running on a computing grid. We have implemented these techniques in Ocelotl, an analysis tool developed during this thesis and designed to handle traces containing up to several billion events. Ocelotl also provides effective interactions suited to a top-down analysis strategy, such as synchronizing the aggregated view with more detailed representations, in order to find the sources of the anomalies.
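The abstract does not give the exact aggregation criterion, but the complexity/information trade-off it describes can be illustrated with a small sketch. Assuming one plausible formulation (complexity gain measured by the number of objects removed from the view, information loss by a Kullback-Leibler style divergence, and an analyst-chosen weight p), a group of entities is collapsed only when the weighted gain outweighs the weighted loss:

```python
import math

def kl_information_loss(values):
    """Information lost (in bits) when a group of non-negative values
    is replaced by their mean -- a Kullback-Leibler style measure."""
    total = sum(values)
    if total == 0:
        return 0.0
    mean = total / len(values)
    return sum(v * math.log2(v / mean) for v in values if v > 0)

def should_aggregate(values, p):
    """Aggregate a group when the weighted complexity gain outweighs the
    weighted information loss. p in [0, 1] is the analyst's knob:
    p -> 1 favours coarse views, p -> 0 favours detailed ones."""
    complexity_gain = len(values) - 1          # objects removed from the view
    info_loss = kl_information_loss(values)
    return p * complexity_gain >= (1 - p) * info_loss

# Example: a homogeneous group collapses, a heterogeneous one survives.
print(should_aggregate([10, 11, 9, 10], p=0.5))   # True  -- little information lost
print(should_aggregate([10, 90, 1, 40], p=0.5))   # False -- heterogeneity preserved
```

With a rule of this kind, homogeneous groups collapse into a single block while heterogeneous behaviour stays visible, which is the discrimination effect the thesis relies on to spot anomalies.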
393

La Visualisation in vivo des « espèces oxygénées radiculaires» au niveau des cellules ganglionnaires de la rétine

Mears, Katrina A. 04 1900 (has links)
No description available.
394

Acquisition et traitement d’images hyperspectrales pour l’aide à la visualisation peropératoire de tissus vitaux / Acquisition and processing of hyperspectral images for assisted intraoperative visualization of vital tissues

Nouri Kridiss, Dorra 26 May 2014 (has links)
L’imagerie hyperspectrale issue de la télédétection va devenir une nouvelle modalité d’imagerie médicale pouvant assister le diagnostic de plusieurs pathologies via la détection des marges tumorales des cancers ou la mesure de l’oxygénation des tissus. L’originalité de ce travail de thèse est de fournir au chirurgien en cours d’intervention une vision améliorée du champ opératoire grâce à une image RGB affichée sur écran, résultat de traitements des cubes hyperspectraux dans le visible, le proche infrarouge et le moyen infrarouge (400-1700 nm). Notre application permet la détection des tissus difficilement détectables et vitaux comme l’uretère. Deux prototypes d’imagerie hyperspectrale utilisant les filtres programmables à cristaux liquides ont été développés, calibrés et mis en oeuvre dans de nombreuses campagnes d’expérimentations précliniques. Les résultats présentés dans cette thèse permettent de conclure que les méthodes de sélection de bandes sont les plus adaptées pour une application interventionnelle de l’imagerie hyperspectrale en salle d’opération puisqu’elles affichent une quantité maximale d’information, un meilleur rendu naturel de l’image RGB résultante et une amélioration maximale de la visualisation de la scène chirurgicale puisque le contraste dans l’image résultat entre le tissu d’intérêt et les tissus environnants a été triplé par rapport à l’image visualisée par l’oeil du chirurgien. Le principal inconvénient de ces méthodes réside dans le temps d’exécution qui a été nettement amélioré par les méthodes combinées proposées. De plus, la bande spectrale du moyen infrarouge est jugée plus discriminante pour explorer les données hyperspectrales associées à l’uretère puisque la séparabilité entre les tissus y est nettement supérieure par rapport à la gamme spectrale du visible.
/ Hyperspectral imaging, initially applied in remote sensing, is becoming a new medical imaging modality that may assist the diagnosis of several diseases through the detection of tumor margins or the measurement of tissue oxygenation. The originality of this work is to provide the surgeon, during surgery, with an improved view of the operative field through an RGB image displayed on screen, obtained by processing hyperspectral cubes in the visible, near-infrared and mid-infrared ranges (400-1700 nm). Our application allows the detection of vital tissues that are otherwise hard to distinguish, such as the ureter. Two hyperspectral imaging prototypes using liquid crystal tunable filters were developed, calibrated and used in several preclinical experimental campaigns. The results presented in this thesis lead to the conclusion that band selection methods are the most suitable for interventional use of hyperspectral imaging in the operating room, since they retain a maximal amount of information, give a more natural rendering of the resulting RGB image and maximally improve the visualization of the surgical scene: the contrast between the tissue of interest and the surrounding tissues in the resulting image was tripled compared with what the surgeon’s eye sees. The main drawback of these methods lies in their execution time, which was significantly improved by the proposed combined methods. Furthermore, the mid-infrared spectral range proves more discriminating for exploring the hyperspectral data associated with the ureter, since the separability between tissues is significantly higher there than in the visible spectral range.
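The abstract does not specify the band selection criteria used in the thesis; as a rough illustration only, the sketch below ranks bands by a Fisher-ratio separability between a tissue-of-interest mask and its surroundings and keeps the three best bands to build a false-colour RGB composite. All function and variable names are hypothetical:

```python
import numpy as np

def select_rgb_bands(cube, roi_mask, bg_mask, n_bands=3):
    """Rank spectral bands by a simple contrast criterion between the tissue
    of interest (roi_mask) and the surrounding tissue (bg_mask), then keep
    the n_bands best ones to build a false-colour RGB composite.
    cube: (rows, cols, bands) reflectance array; masks: boolean (rows, cols)."""
    roi = cube[roi_mask]     # (n_roi_pixels, bands)
    bg = cube[bg_mask]       # (n_bg_pixels, bands)
    # Fisher-style separability per band: squared mean gap over pooled variance.
    contrast = (roi.mean(0) - bg.mean(0)) ** 2 / (roi.var(0) + bg.var(0) + 1e-12)
    best = np.sort(np.argsort(contrast)[::-1][:n_bands])
    rgb = cube[:, :, best].astype(float)
    rgb -= rgb.min(axis=(0, 1))
    rgb /= rgb.max(axis=(0, 1)) + 1e-12      # stretch each channel to [0, 1]
    return best, rgb

# Toy usage on synthetic data:
cube = np.random.rand(64, 64, 100)
roi = np.zeros((64, 64), bool); roi[20:40, 20:40] = True
bands, rgb = select_rgb_bands(cube, roi, ~roi)
```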
395

Analysis of online news media through visualisation and text clustering

Pasi, Niharika January 2018 (has links)
Online news has for several years grown in frequency and popularity as a convenient source of information. One result of this surge is increased competition for viewership and for the prolonged relevance of online news websites. Higher demands from internet audiences have led to the use of sensationalism, such as ‘clickbait’ articles or ‘fake news’, to attract more viewers. The subsequent shift in the journalistic approach of new media opened new opportunities to study the behaviour and intent behind news content. As publications cater their news to a specific target audience, conclusions about the outlets and their readers can be deduced from the content they choose to broadcast. To understand the choices behind a publication’s content, this thesis uses automated text categorisation to analyse the words and phrases used by major news outlets. It acts as a case study of approximately 143,000 online news articles from 15 publications focused on the United States, published between 2016 and 2017. The focus of the thesis is to create a framework that observes how news articles group themselves based on the most relevant terms in their corpora; complementary analyses were performed to gain similar insights into the structure of the news over a given period of time. A preliminary quantitative analysis was conducted before data processing, followed by K-means clustering of the cleansed articles. The overall categorisation approach and visual analysis provided sufficient evidence that the framework can be re-used with further adjustments. The resulting clusters indicated that the most common news categories for the selected publications were either politics - with special focus on the U.S. presidential election - or crime-related news within the U.S. and around the world. The visual shape of these clusters strongly implied that these two categories appeared even within groups dominated by other genres such as finance or infotainment. Moreover, the practice of churning out multiple articles and stories per day suggests that mainstream online news websites continue to use broadcast journalism as their main form of communication with their audiences.
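A minimal sketch of the kind of pipeline the abstract describes (TF-IDF term weighting followed by K-means), using scikit-learn; the sample headlines, feature limit and cluster count are invented for illustration and are not the thesis's actual settings:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

articles = [
    "Presidential candidates clash in final debate before the election",
    "Police investigate shooting in downtown area, suspect at large",
    "Stock markets rally as tech earnings beat expectations",
    "Senate committee questions nominee on foreign policy record",
]

# Represent each article by its most relevant terms (TF-IDF weights),
# then group similar articles with K-means.
vectorizer = TfidfVectorizer(stop_words="english", max_features=5000)
X = vectorizer.fit_transform(articles)

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# Most characteristic terms per cluster, read off the centroid weights.
terms = vectorizer.get_feature_names_out()
for c, centroid in enumerate(km.cluster_centers_):
    top = centroid.argsort()[::-1][:5]
    print(f"cluster {c}:", [terms[i] for i in top])
```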
396

Use of Machine Learning Algorithms to Propose a New Methodology to Conduct, Critique and Validate Urban Scale Building Energy Modeling

January 2017 (has links)
City administrators and real-estate developers have been setting rather aggressive energy efficiency targets. This, in turn, has led building science research groups across the globe to focus on urban-scale building performance studies and on the level of abstraction associated with such simulations. The increasing maturity of stakeholders towards energy efficiency and comfortable working environments has led researchers to develop methodologies and tools for addressing policy-driven interventions, whether urban-level energy systems, buildings’ operational optimization or retrofit guidelines. Typically, these large-scale simulations are carried out by grouping buildings based on their design similarities, i.e. by standardizing the buildings. Such an approach does not necessarily produce workable inputs for effective decision-making. To address this, a novel approach is proposed in the present study. The principal objective is to propose, define and evaluate a methodology that uses machine learning algorithms to define representative building archetypes for Stock-level Building Energy Modeling (SBEM) based on an operational parameter database. The study uses the Phoenix-climate subset of the CBECS-2012 survey microdata for analysis and validation. Using this database, parameter correlations are studied to understand the relation between input parameters and energy performance. Contrary to precedent, the study establishes that energy performance is better explained by non-linear models, and this non-linear behavior is captured with advanced learning algorithms. Based on these algorithms, the buildings under study are grouped into meaningful clusters. The cluster medoids (the actual buildings closest to each cluster’s centroid, which can therefore represent it) are established statistically to identify the level of abstraction that is acceptable for whole-building energy simulation and the subsequent retrofit decision-making. The methodology is further validated by conducting Monte Carlo simulations on 13 key simulation input parameters; the sensitivity analysis of these parameters is used to identify the optimum retrofits. In the sample analysis, the envelope parameters are found to be the most sensitive with respect to the building’s EUI, so retrofit packages should be directed at them to maximize the reduction in energy use. / Dissertation/Thesis / Masters Thesis Architecture 2017
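The abstract's notion of a cluster medoid (the real building closest to a cluster centroid, used as the archetype) can be sketched as follows; the clustering algorithm, feature set and cluster count here are assumptions for illustration, not the study's actual configuration:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

def building_archetypes(features, n_clusters=4, random_state=0):
    """Group buildings by their operational parameters and return, for each
    cluster, the index of its medoid -- the real building closest to the
    cluster centroid -- to serve as the representative archetype."""
    X = StandardScaler().fit_transform(features)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=random_state).fit(X)
    medoids = []
    for c in range(n_clusters):
        members = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(X[members] - km.cluster_centers_[c], axis=1)
        medoids.append(members[np.argmin(dists)])
    return km.labels_, medoids

# Toy usage: 200 buildings described by a few hypothetical operational
# parameters (e.g. floor area, occupancy hours, plug-load density, EUI).
rng = np.random.default_rng(0)
features = rng.normal(size=(200, 4))
labels, medoids = building_archetypes(features)
print("representative buildings:", medoids)
```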
397

Analyse, à l'aide d'oculomètres, de techniques de visualisation UML de patrons de conception pour la compréhension de programmes / Eye-tracking analysis of UML design-pattern visualization techniques for program comprehension

Cepeda Porras, Gerardo January 2008 (has links)
No description available.
398

Algorithmes et structures de données compactes pour la visualisation interactive d’objets 3D volumineux / Algorithms and compact data structures for interactive visualization of gigantic 3D objects

Jamin, Clément 25 September 2009 (has links)
Les méthodes de compression progressives sont désormais arrivées à maturité (les taux de compression sont proches des taux théoriques) et la visualisation interactive de maillages volumineux est devenue une réalité depuis quelques années. Cependant, même si l’association de la compression et de la visualisation est souvent mentionnée comme perspective, très peu d’articles traitent réellement ce problème, et les fichiers créés par les algorithmes de visualisation sont souvent beaucoup plus volumineux que les originaux. En réalité, la compression favorise une taille réduite de fichier au détriment de l’accès rapide aux données, alors que les méthodes de visualisation se concentrent sur la rapidité de rendu : les deux objectifs s’opposent et se font concurrence. A partir d’une méthode de compression progressive existante incompatible avec le raffinement sélectif et interactif, et uniquement utilisable sur des maillages de taille modeste, cette thèse tente de réconcilier compression sans perte et visualisation en proposant de nouveaux algorithmes et structures de données qui réduisent la taille des objets tout en proposant une visualisation rapide et interactive. En plus de cette double capacité, la méthode proposée est out-of-core et peut traiter des maillages de plusieurs centaines de millions de points. Par ailleurs, elle présente l’avantage de traiter tout complexe simplicial de dimension n, des soupes de triangles aux maillages volumiques.
/ Progressive compression methods are now mature (the rates obtained are close to theoretical bounds) and interactive visualization of huge meshes has been a reality for a few years. However, even if the combination of compression and visualization is often mentioned as a perspective, very few papers actually address this problem, and the files created by visualization algorithms are often much larger than the original ones. In fact, compression favors a small file size to the detriment of fast data access, whereas visualization methods focus on rendering speed: the two goals are opposed and compete with each other. Starting from an existing progressive compression method that is incompatible with selective, interactive refinement and usable only on modestly sized meshes, this thesis tries to reconcile lossless compression and visualization by proposing new algorithms and data structures that radically reduce the size of the objects while supporting fast, interactive navigation. In addition to this double capability, our method works out-of-core and can handle meshes containing several hundred million vertices. Furthermore, it has the advantage of handling any n-dimensional simplicial complex, from triangle soups to volumetric meshes.
399

Design and evaluation of an educational tool for understanding functionality in flight simulators : Visualising ARINC 610C

Söderström, Arvid, Thorheim, Johanna January 2017 (has links)
The use of simulation in aircraft development and pilot training is essential, as it saves time and money. The ARINC 610C standard describes simulator functionality and was developed to streamline the use of flight simulators. However, the text-based standard lacks an overview, and its function descriptions are hard to understand for the simulator developers who are its main users. In this report, an educational software tool for increasing the usability of ARINC 610C is conceptualised. The usability goals and requirements were established through multiple interviews and two observation studies. Six concepts were then produced and evaluated in a workshop with domain experts, and properties from the evaluated concepts were combined into one concluding concept. A prototype was finally developed and evaluated in usability tests with the potential user group. The results from the heuristic evaluation, the usability tests, and a mean system usability score of 79.5 suggest that the prototyped system, developed for visualising ARINC 610C, is a viable solution.
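Assuming the reported usability score refers to the standard ten-item System Usability Scale, its scoring rule is simple enough to sketch; the sample responses below are invented for illustration:

```python
def sus_score(responses):
    """System Usability Scale score for one respondent.
    responses: list of 10 ratings, each 1-5, in questionnaire order.
    Odd-numbered items are positive statements, even-numbered negative."""
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    contributions = [
        (r - 1) if i % 2 == 0 else (5 - r)   # 0-based index: items 1, 3, 5, ... are positive
        for i, r in enumerate(responses)
    ]
    return sum(contributions) * 2.5          # scale the 0-40 sum to 0-100

# The thesis reports a mean score of 79.5; values above roughly 68 are
# commonly read as above-average usability. Example respondents:
participants = [
    [4, 2, 5, 1, 4, 2, 4, 2, 5, 2],
    [5, 1, 4, 2, 4, 1, 5, 2, 4, 2],
]
mean_sus = sum(sus_score(p) for p in participants) / len(participants)
print(mean_sus)
```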
400

Multi-Touch Interfaces for Public Exploration and Navigation in Astronomical Visualizations

Bosson, Jonathan January 2017 (has links)
OpenSpace is an interactive data visualization software system that portrays the entire known universe in a 3D simulation. The current navigation interface requires explanation, which prevents OpenSpace from being displayed effectively at public exhibitions. Research has shown that large tangible touch surfaces with a multi-touch navigation interface engage users more than mouse and keyboard and improve their understanding of navigation control, decreasing the instruction required to learn the system's user interface. This thesis shows that combining a velocity-based interaction model with a screen-space direct-manipulation formulation produces a user-friendly interface, giving the user precise control of objects and efficient travel between them across the vastness of space. It presents the work of integrating such a multi-touch navigation interface, combining velocity-based interaction and screen-space direct manipulation, into the OpenSpace software framework.
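As a loose illustration of the velocity-based half of that interaction model (not the OpenSpace implementation), the sketch below lets a drag set an angular velocity that decays exponentially after release, so a flick keeps the camera orbiting before coming to rest; all parameter names and values are arbitrary:

```python
import math

class FlickRotation:
    """Minimal sketch of a velocity-based touch interaction: a drag sets an
    angular velocity that decays exponentially after release, so a flick
    keeps the camera orbiting and slowly comes to rest."""
    def __init__(self, sensitivity=0.005, friction=2.0):
        self.sensitivity = sensitivity   # radians of orbit per pixel of drag
        self.friction = friction         # larger -> the flick dies out faster
        self.velocity = 0.0              # radians per second
        self.angle = 0.0

    def on_drag(self, dx_pixels, dt):
        # While the finger is down, velocity follows the finger directly.
        self.velocity = self.sensitivity * dx_pixels / dt
        self.angle += self.sensitivity * dx_pixels

    def on_release_update(self, dt):
        # After release, integrate the decaying velocity each frame.
        self.velocity *= math.exp(-self.friction * dt)
        self.angle += self.velocity * dt

nav = FlickRotation()
nav.on_drag(dx_pixels=120, dt=0.016)   # one drag frame
for _ in range(60):                    # one second of coasting after release
    nav.on_release_update(dt=1 / 60)
print(round(nav.angle, 3), round(nav.velocity, 3))
```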
