41

Study of large-scale coherent structures in the near field and transition regions of a mechanically oscillated planar jet.

Riese, Michael January 2009 (has links)
Enhancing mixing and fluid entrainment by exciting quasi-steady jets has been a subject of research for more than three decades. During the 1980s, special emphasis was placed on mechanically oscillated planar jets and the possibility of augmenting the thrust of V/STOL aircraft. However, during this time little attention was paid to the classification of flow regimes, the development of coherent structures, or the existence of distinct regions within the jet near field. For the present study, a large-aspect-ratio nozzle was oscillated in simple harmonic motion in the direction transverse to the width of the nozzle. For a constant nozzle height, the stroke length, oscillation frequency and jet velocity were systematically varied. Over 240 flow cases were examined using a novel method of phase-locked flow visualisation. Following an initial analysis of the acquired data, a small subset of flow conditions was selected for further quantitative investigation using Particle Image Velocimetry (PIV). The phase-locked flow visualisation led to the identification and classification of three separate flow regimes: the Base Flow, Resonant Flow and Bifurcation Flow Regimes. Each regime is linked to the others by the presence of a small number of repetitive coherent structures in the form of starting and stopping vortices. The analysis revealed a relationship between the stroke-to-nozzle-height ratio and the ratio of the forcing frequency to the natural vortex shedding frequency in the planar jet. This directly contradicts the relationship between the Strouhal and Reynolds numbers of the jet proposed by previous investigators. Comparison of phase-locked PIV and flow visualisation data confirms both the validity of the new regime classification and the identification of the relevant large-scale structures. Time-averaged vorticity data are also used to further illustrate the differences between the three flow regimes. Investigation of the time-averaged qualitative data for the Base and Resonant Flow Regimes shows that three distinct flow regions exist within both regimes. Adjacent to the nozzle is the initial formation region, where all large-scale structures form. This is followed by a coherent near-field region in which the jet exhibits very little spread for both the Base and Resonant Flow Regimes; within this region, no pairing of the large-scale vortices from opposing sides of the flow is found. This region is followed by a transition region marked by the sudden breakup and dissipation of all visible large-scale coherent structures. The vortex formation distance is then investigated using the available PIV data and compared with the results of previous investigations. The data show that the formation distance depends on the jet velocity, the oscillation frequency and the stroke length. The agreement with previous data is poor owing to differences in the method of measurement. Quantitative data are also used to investigate the centreline velocity decay in relation to changes in the jet Reynolds number and the stroke-to-nozzle-height ratio. The results show that the velocity decay rate increases with increasing stroke length, as expected from the findings of earlier studies. In addition, the centreline velocity decay rates in the mean jet transition region appear to be constant for each stroke length in the cases examined. Finally, conclusions are drawn and recommendations for future work are presented.
/ http://proxy.library.adelaide.edu.au/login?url= http://library.adelaide.edu.au/cgi-bin/Pwebrecon.cgi?BBID=1349701 / Thesis (Ph.D.) -- University of Adelaide, School of Mechanical Engineering, 2009
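For reference, hedged definitions of the two dimensionless groups named in this abstract, assuming the nozzle height h and jet exit velocity U_j as characteristic scales (the thesis may use different ones):

    % Standard forms; the choice of characteristic scales is an assumption here.
    \[
      \mathrm{St} = \frac{f\,h}{U_j},
      \qquad
      \mathrm{Re} = \frac{U_j\,h}{\nu}
    \]
    % f   : forcing (or natural vortex shedding) frequency
    % h   : nozzle height
    % U_j : jet exit velocity
    % \nu : kinematic viscosity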
42

Automated spatial information retrieval and visualisation of spatial data

Walker, Arron R. January 2007 (has links)
An increasing amount of freely available Geographic Information System (GIS) data on the Internet has stimulated recent research into Spatial Information Retrieval (SIR). Typically, SIR addresses the problem of retrieving spatial data on a dataset-by-dataset basis. In practice, however, GIS datasets are generally not analysed in isolation; more often than not, multiple datasets are required to create a map for a particular analysis task. To do this with current SIR techniques, each dataset is retrieved one by one using traditional retrieval methods and manually added to the map. To automate map creation, the traditional SIR paradigm of matching a query to a single dataset type must be extended to include discovering relationships between different dataset types. This thesis presents a Bayesian inference retrieval framework that incorporates expert knowledge in order to retrieve all relevant datasets and automatically create a map from an initial user query. The framework consists of a Bayesian network that utilises causal relationships between GIS datasets. A series of Bayesian learning algorithms is presented that automatically discovers these causal linkages from historic expert knowledge about GIS datasets. The new retrieval model improves support for complex and vague queries through the discovered dataset relationships. In addition, the framework learns which datasets are best suited to particular query inputs through feedback supplied by the user. The thesis evaluates the new Bayesian framework for SIR using a test set of queries and responses, measuring the performance of the new algorithms against conventional ones. This contribution will increase the performance and efficiency of knowledge extraction from GIS by allowing users to focus on interpreting data instead of finding which data are relevant to their analysis, and it will help make GIS accessible to non-technical users.
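A minimal sketch of the co-retrieval idea described above, with invented dataset names: it estimates, from historic expert-made maps, the probability that a candidate dataset belongs on a map given the queried dataset. The thesis's actual framework is a full Bayesian network with user feedback; this only illustrates learning dataset relationships from co-occurrence.

    # Toy "historic maps": each is the set of dataset types an expert combined.
    from collections import Counter, defaultdict

    historic_maps = [
        {"roads", "rivers", "contours"},
        {"roads", "landuse"},
        {"rivers", "contours", "floodzones"},
        {"roads", "rivers", "floodzones"},
    ]

    def co_retrieval_probs(maps):
        """P(candidate on map | query on map), estimated from co-occurrence."""
        query_counts = Counter()
        pair_counts = defaultdict(Counter)
        for m in maps:
            for q in m:
                query_counts[q] += 1
                for c in m:
                    if c != q:
                        pair_counts[q][c] += 1
        return {q: {c: n / query_counts[q] for c, n in cs.items()}
                for q, cs in pair_counts.items()}

    probs = co_retrieval_probs(historic_maps)
    # Rank datasets to auto-add to the map when the user queries "rivers".
    print(sorted(probs["rivers"].items(), key=lambda kv: -kv[1]))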
43

Seeing is understanding : the effect of visualisation in understanding programming concepts

Zagami, Jason Anthony January 2008 (has links)
How and why visualisations support learning was the subject of this qualitative, instrumental, collective case study. Five computer programming languages (PHP, Visual Basic, Alice, GameMaker, and RoboLab), supporting differing degrees of visualisation, were used as cases to explore the effectiveness of software visualisation in developing fundamental computer programming concepts (sequence, iteration, selection, and modularity). Cognitive theories of visual and auditory processing, cognitive load, and mental models provided a framework within which the cognitive development of thirty-one 15-17 year old students, drawn from a Queensland metropolitan secondary private girls' school as active participants in the research, was tracked and measured. Seventeen findings in three sections increase our understanding of the effects of visualisation on the learning process. The study extended the use of mental model theory to track the learning process, and demonstrated the application of student-led, research-based metacognitive analysis of individual and peer cognitive development both as a means to support research and as an approach to teaching. The findings also put forward an explanation for failures in previous software visualisation studies; in particular, the study demonstrated that, for the cases examined, where complex concepts are being developed, mixing auditory (or text) and visual elements can result in excessive cognitive load and impede learning. This finding provides a framework for selecting the most appropriate instructional programming language based on the cognitive complexity of the concepts under study.
45

In-Situ and In-Transit methods: toward a continuum between interactive and offline applications at large scale

Dreher, Matthieu 25 February 2015 (has links)
Parallel simulations have become indispensable tools in many scientific areas. To simulate complex phenomena, these simulations are executed on large parallel machines, whose computational power has grown steadily, allowing ever larger models to be simulated. Unfortunately, the I/O systems needed to save the data produced by simulations have grown at a much slower pace. Already today, it is difficult for scientists to save all the data they need and to have enough computational power to analyse them afterwards. At the exascale time frame, it is expected that less than 1% of the data produced by a simulation can be saved. Yet these data are one of the keys to major scientific discoveries. In-situ analytics are a promising solution to this problem. The idea is to perform analyses while the simulation is still running and the data are still in memory. This approach both avoids the I/O bottleneck and harnesses the computational power of parallel machines for compute-heavy analyses. In this thesis, we propose to use the dataflow paradigm to enable the construction of complex in-situ applications. We rely on the FlowVR middleware, which couples heterogeneous parallel codes by creating communication channels between them to form a graph. FlowVR is flexible enough to support several placement strategies for the analysis processes, whether on the simulation nodes, on dedicated cores or on dedicated nodes. Moreover, in-situ analytics can be executed asynchronously, leading to a low impact on simulation performance. To demonstrate the flexibility of our approach, we targeted molecular dynamics, and in particular Gromacs, a molecular dynamics simulation package commonly used by biologists that scales to several thousand cores. In close collaboration with experts in biology, we built several realistic applications. The first allows a user to steer a molecular dynamics simulation toward a desired configuration: we coupled Gromacs with a live viewer and a haptic device, and by integrating user-applied forces the user can guide molecular systems of more than one million atoms. Our second application focuses on long simulations running in batch mode on large parallel machines: we replace Gromacs' native writing method, offloading it onto our infrastructure using two distinct methods, and we also propose a parallel rendering algorithm that can adapt to various placement strategies. Our third application studies how biologists can use in-situ applications: we developed a unified infrastructure able to run analyses on interactive simulations, on long-running simulations, and in post-mortem.
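A toy sketch of the asynchronous in-situ pattern described above, using only Python's standard library. This is not FlowVR's API; the process layout, frame contents and the analysis are all invented for illustration. The point is that frames are handed off in memory through a bounded queue, so the analysis runs on other cores and never forces the solver to write files.

    import multiprocessing as mp

    def simulation(out_q, steps=5, natoms=1000):
        import random
        for step in range(steps):
            # Stand-in for one MD timestep producing atom positions.
            frame = [(random.random(), random.random(), random.random())
                     for _ in range(natoms)]
            out_q.put((step, frame))   # hand off in memory, no file I/O
        out_q.put(None)                # end-of-stream marker

    def analysis(in_q):
        while (item := in_q.get()) is not None:
            step, frame = item
            # Stand-in analysis: centre of mass of the system.
            com = [sum(c) / len(frame) for c in zip(*frame)]
            print(f"step {step}: centre of mass ~ {com}")

    if __name__ == "__main__":
        q = mp.Queue(maxsize=4)        # bounded queue caps memory use
        sim = mp.Process(target=simulation, args=(q,))
        ana = mp.Process(target=analysis, args=(q,))
        sim.start(); ana.start()
        sim.join(); ana.join()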
46

Assisting digital forensic analysis via exploratory information visualisation

Hales, Gavin January 2016 (has links)
Background: Digital forensics is a rapidly expanding field, due to continuing advances in computer technology and increases in the data storage capabilities of devices. However, the tools supporting digital forensics investigations have not kept pace with this evolution, often leaving the investigator to analyse large volumes of textual data and rely heavily on their own intuition and experience. Aim: This research proposes that, given the ability of information visualisation to provide an end user with an intuitive way to rapidly analyse large volumes of complex data, such approaches could be applied to digital forensics datasets. Such methods are investigated, supported by a review of the literature on the use of these techniques in other fields. The hypothesis of this body of research is that by utilising exploratory information visualisation techniques in a tool supporting digital forensic investigations, gains in investigative effectiveness can be realised. Method: To test the hypothesis, this research examines three case studies which look at different forms of information visualisation and their implementation with a digital forensic dataset. Two of these case studies take the form of prototype tools developed by the researcher, and one utilises a tool created by a third-party research group. A pilot study was conducted on each case, with the strengths and weaknesses of each feeding into the next case study. The culmination of these case studies is a prototype tool, named Insight, which presents a timeline visualisation of user behaviour on a device. This tool was subjected to an experiment involving a class of university digital forensics students, who were given a number of questions about a synthetic digital forensic dataset. Approximately half were given Insight to use, and the others a common open-source tool. The assessed metrics included how long the participants took to complete all tasks, how accurate their answers were, and how easy the participants found the tasks to complete. They were also asked for feedback at multiple points throughout the task. Results: The results showed a statistically significant increase in accuracy for one of the six tasks for the participants using the Insight prototype. Participants also found two of the six tasks significantly easier to complete when using the prototype tool. There was no statistically significant difference between the completion times of the two participant groups, and no statistically significant differences in the accuracy of participant answers for the remaining five tasks. Conclusions: The results from this body of research suggest that there is potential for gains in investigative effectiveness when information visualisation techniques are applied to a digital forensic dataset. Specifically, in some scenarios, the investigator can draw conclusions that are more accurate than those drawn using primarily textual tools. There is also evidence to suggest that investigators reached these conclusions significantly more easily when using a tool with a visual format. None of the scenarios left investigators at a significant disadvantage in terms of accuracy or usability when using the prototype visual tool rather than the textual tool.
It is noted that this research did not show that the use of information visualisation techniques leads to any statistically significant difference in the time taken to complete a digital forensics investigation.
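A generic sketch of the kind of timeline view described above, with invented event data; the abstract does not specify Insight's implementation, so this only illustrates plotting timestamped user-activity events by category so that bursts of behaviour stand out.

    import matplotlib.pyplot as plt
    from datetime import datetime

    events = [  # (timestamp, category) -- hypothetical artefacts
        (datetime(2024, 5, 1, 9, 2),  "web"),
        (datetime(2024, 5, 1, 9, 5),  "web"),
        (datetime(2024, 5, 1, 9, 7),  "usb"),
        (datetime(2024, 5, 1, 9, 30), "file"),
        (datetime(2024, 5, 1, 9, 31), "file"),
    ]

    categories = sorted({c for _, c in events})
    ypos = {c: i for i, c in enumerate(categories)}

    fig, ax = plt.subplots(figsize=(8, 2 + 0.4 * len(categories)))
    ax.scatter([t for t, _ in events], [ypos[c] for _, c in events])
    ax.set_yticks(range(len(categories)), categories)  # matplotlib >= 3.5
    ax.set_xlabel("time")
    ax.set_title("User activity timeline (synthetic data)")
    plt.tight_layout()
    plt.show()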
47

Information Visualization in the Big Data era: tackling scalability issues using multiscale abstractions

Perrot, Alexandre 27 November 2017 (has links)
With the advent of the Big Data era come new challenges for information visualisation. First, the amount of data to be visualised exceeds the available screen space, causing occlusion. Second, the data cannot be stored and processed on a conventional computer. A Big Data visualisation system must therefore provide both perceptual and performance scalability. In this thesis, we propose multiscale data abstraction as a solution to both problems. Several levels of detail are precomputed on a Big Data infrastructure, making it possible to visualise large datasets of up to several billion points. To this end, we propose two approaches to implementing the canopy clustering algorithm on a distributed computation platform. We present applications of our method to geolocalised data visualised as a heatmap, and to large graphs. Both applications are built with the Fatum dynamic visualisation library, which is also presented in this thesis.
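A minimal single-machine sketch of the canopy clustering algorithm named above, on random 2D points with Euclidean distance. The thresholds T1 > T2 and all data are assumptions, and the thesis's distributed implementations are not reproduced here; this only shows the core loop that produces coarse, overlapping groups usable as a level of detail.

    import math, random

    def canopy(points, t1, t2):
        """Group points into (possibly overlapping) canopies; t1 > t2."""
        remaining = list(points)
        canopies = []
        while remaining:
            center = remaining.pop(random.randrange(len(remaining)))
            members, keep = [center], []
            for p in remaining:
                d = math.dist(center, p)
                if d < t1:
                    members.append(p)  # loosely belongs to this canopy
                if d >= t2:
                    keep.append(p)     # still available as a future center
            remaining = keep
            canopies.append(members)
        return canopies

    pts = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(1000)]
    levels = canopy(pts, t1=20.0, t2=10.0)
    print(len(levels), "canopies; largest:", max(map(len, levels)))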
48

Development of multivariate data visualisation software and searches for Lepton Jets at CMS

Radburn-Smith, Benjamin Charles January 2013 (has links)
Despite advances in multivariate visualisations and computer graphics that allow for effective implementations, most particle physics analyses still rely on conventional data visualisations. The currently available software implementing these techniques has been found to be inadequate for the large volumes of multivariate data produced by modern particle physics experiments. After a design and development period, a novel piece of software, DataViewer, was produced. DataViewer was used as part of a physics analysis at the CMS experiment, searching for an associated Higgs boson decaying through a dark sector into collimated groups of electrons, called Electron Jets. Observation of such a signature could explain astrophysical anomalies observed by numerous telescopes. The full 2011 dataset recorded by the experiment, corresponding to an integrated luminosity of 4.83 fb^(-1) at a centre-of-mass energy of sqrt(s) = 7 TeV, was analysed. DataViewer was found to be extremely powerful in rapidly identifying interesting attributes of the signature, which could then be exploited in the analysis. It could also be used for cross-checking other complex techniques, including multivariate classifiers. No evidence was found for the production of a Higgs boson in association with a Z boson, where the Higgs subsequently decays to Electron Jets. Upper limits on the production of benchmark models were set at the 95% Confidence Level.
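One standard multivariate-visualisation technique of the kind such software implements is parallel coordinates; a hedged sketch with invented per-event features follows (the abstract does not specify which plots DataViewer provides):

    import pandas as pd
    from pandas.plotting import parallel_coordinates
    import matplotlib.pyplot as plt

    # Hypothetical event features for signal vs background candidates.
    df = pd.DataFrame({
        "pT":        [45.0, 38.2, 61.5, 22.1, 55.3, 30.8],
        "eta":       [0.3, -1.2, 0.8, 2.1, -0.5, 1.6],
        "isolation": [0.05, 0.40, 0.08, 0.55, 0.10, 0.35],
        "class":     ["sig", "bkg", "sig", "bkg", "sig", "bkg"],
    })

    # Each event becomes a polyline across the feature axes; classes that
    # separate on any axis become visible at a glance.
    parallel_coordinates(df, "class", colormap="coolwarm")
    plt.title("Parallel coordinates (synthetic events)")
    plt.show()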
49

Effects of prior spatial experience, gender, 3d solid computer modelling and different cognitive styles on spatial visualisation skills of graphic design students at a rural-based South African university

Kok, Petrus Jacobus, Bayaga, A. January 2018 (has links)
Submitted to the Department of Planning and Administration in fulfilment of the requirements for the degree of Doctor of Education in the Faculty of Education at the University of Zululand, 2018. / There is little to no evidence on the relationship between prior spatial experience, gender, three-dimensional (3D) solid modelling and different cognitive styles, and their effect on spatial visualisation skills, especially among graphics design students at rural universities. Additionally, graphics design students often struggle to understand, process and convert multi-faceted objects from orthographic two-dimensional (2D) views into isometric (3D) projections. However, ongoing research has established a strong link between spatial visualisation skills and the effective completion of graphics design content. Moreover, conventional teaching and learning practice using textbooks, physical models and pencil drawings was found to be insufficient for improving the spatial visualisation skills of pre-service teacher students at a rural university. These challenges formed the basis of the present study, which focused on the relationship and effect of prior spatial experience, gender, 3D solid modelling software and different cognitive styles on the spatial visualisation skills of graphics design students at a rural university. Students at this university come from disadvantaged and under-resourced schools and arrive at university with little or no computer-based experience. Underpinned by Piaget's theory of perception and imagery, the study determined the effect of 3D solid computer modelling on students' spatial visualisation skills. The study was carried out at the University of Zululand (UNIZULU), a rural-based university, with 200 pre-service teachers undertaking a graphics design module. A mixed-methods sequential research design was employed, using a spatial experience questionnaire, the Purdue Spatial Visualisation Test and semi-structured interviews to evaluate students' prior spatial experiences, gender differences, spatial visualisation skills and cognitive styles before and after a 3D solid computer modelling intervention. The findings showed no relationship between prior spatial experience, gender and spatial visualisation skills; however, mathematics and sketching activity emerged as strong predictors of spatial visualisation. The findings also showed a significant difference, with a moderate positive effect, in the spatial visualisation skills of students in the experimental group compared with those in the control group. As a consequence, a model was developed aimed at improving rural-based instruction and learning for 2D-to-3D drawing.
50

Automated Visualisation of Product Deployment

Chowdary, Milton January 2022 (has links)
The development of large products, whether software or hardware, faces many challenges. Two of these are keeping everyone involved up-to-date on the latest developments and providing a clear overview of the product's components. A proposed solution is a graph presenting all the necessary information about the product. The issue with a graph of a constantly changing product is that keeping it up-to-date requires a lot of maintenance. This thesis presents the implementation of software for Ericsson that automatically gathers the required information about a given product and creates a graph to present it. The software traverses a file structure containing information about a product and stores that information, which is then used to create two different graphs: a tree graph and a box graph. The graphs were evaluated, both by the author and by the team at Ericsson, against visualisation principles. The results show that the automatically gathered information is effective and can communicate the information needed. The tree graph received slightly more favourable reviews than the currently available, manually created graph. However, limitations in the visualisation tool's graph layout made the graphs larger than necessary and therefore harder to understand. To achieve a better result, other visualisation tools could be considered. The software created tree graphs that are usable at Ericsson and could prove helpful for development.
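A hedged sketch of the traversal step described above: walk a product's file structure with Python's standard library and emit a Graphviz DOT tree. The actual Ericsson software, its inputs and its box-graph output are not reproduced; the root path here is hypothetical.

    import os

    def tree_to_dot(root):
        """Return Graphviz DOT text for the directory tree under root."""
        lines = ["digraph product {", "  rankdir=LR;"]
        for dirpath, dirnames, filenames in os.walk(root):
            for child in dirnames + filenames:
                child_path = os.path.join(dirpath, child)
                # Node ids are full paths (unique); labels are names only.
                lines.append(f'  "{dirpath}" [label="{os.path.basename(dirpath) or root}"];')
                lines.append(f'  "{child_path}" [label="{child}"];')
                lines.append(f'  "{dirpath}" -> "{child_path}";')
        lines.append("}")
        return "\n".join(lines)

    if __name__ == "__main__":
        # Render afterwards with Graphviz: dot -Tsvg product.dot -o product.svg
        with open("product.dot", "w") as f:
            f.write(tree_to_dot("./example_product"))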
