  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
121

Development Of A Plume With Off-Source Volumetric Heating

Venkatakrishnan, L 07 1900 (has links) (PDF)
No description available.
122

Model-based clustering for categorical and mixed data sets

Marbac-Lourdelle, Matthieu 23 September 2014 (has links)
This thesis contributes to the cluster analysis of categorical and mixed data. The proposed approaches are probabilistic models of the observed data distribution. For categorical or mixed data, the classical model assumes that the variables are independent conditionally on class membership; this approach is biased when the variables are correlated within classes. The aim of this thesis is to study and propose mixture models that relax the conditional independence assumption while still summarising each class with a few meaningful parameters. The first part is devoted to the cluster analysis of categorical data, where intra-class dependencies confront the statistician with combinatorial challenges (a large number of parameters and a complex model-selection problem). Our contribution consists of two parsimonious mixture models that relax conditional independence by grouping the variables into conditionally independent blocks. Both models define the block distribution so as to limit the number of parameters while remaining easy to interpret: the first models a block of variables as a mixture of the two extreme dependency distributions, while the second uses a multinomial distribution per mode. The second part addresses the cluster analysis of mixed data, where the specific difficulty is the lack of a reference distribution for variables of different kinds. We therefore define a probabilistic model meeting two constraints: the one-dimensional marginal distributions of each component must be classical distributions, to ease interpretation, and the model must characterise the main intra-class dependencies with a few meaningful parameters. These requirements lead naturally to copula theory, and we propose a mixture of Gaussian copulas, which we regard as the main contribution of this thesis. Bayesian inference for this model is performed with a Gibbs sampler, and the classical information criteria (BIC, ICL) are used for model selection.
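As an illustration of the model-selection step mentioned in this abstract, the sketch below shows how BIC and ICL are typically computed from a fitted finite mixture, using the maximised log-likelihood, the number of free parameters, and the posterior class-membership probabilities. It is a generic sketch with placeholder inputs, not the thesis's implementation.

```python
import numpy as np

def bic(loglik: float, n_params: int, n_obs: int) -> float:
    """Bayesian Information Criterion (lower is better in this convention)."""
    return -2.0 * loglik + n_params * np.log(n_obs)

def icl(loglik: float, n_params: int, n_obs: int, resp: np.ndarray) -> float:
    """ICL: BIC plus an entropy penalty on the posterior class memberships
    `resp` (shape n_obs x n_classes), penalising poorly separated classes."""
    entropy = -np.sum(resp * np.log(np.clip(resp, 1e-12, None)))
    return bic(loglik, n_params, n_obs) + 2.0 * entropy

# Placeholder values standing in for the output of an EM or Gibbs fit.
resp = np.array([[0.9, 0.1], [0.2, 0.8], [0.95, 0.05]])
print(bic(-150.0, 7, 3), icl(-150.0, 7, 3, resp))
```

Under this sign convention, the candidate model with the smallest criterion value is retained.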
123

CloneCompass: visualizations for code clone analysis

Wang, Ying 05 May 2020 (has links)
Code clones are identical or similar code fragments in a single software system or across multiple systems. Frequent copy-paste-modify activities and reuse of existing systems result in maintenance difficulties and security issues. Addressing these problems requires analysts to undertake code clone analysis, which is an intensive process to discover problematic clones in existing software. To improve the efficiency of this process, tools for code clone detection and analysis, such as Kam1n0 and CCFinder, were created. Kam1n0 is an efficient code clone search engine that facilitates assembly code analysis. However, Kam1n0 search results can contain millions of function-clone pairs, and efficiently exploring and comprehensively understanding the resulting data can be challenging. This thesis presents a design study whereby we collaborated with analyst stakeholders to identify requirements for a tool that visualizes and scales to millions of function-clone pairs. These requirements led to the design of an interactive visual tool, CloneCompass, consisting of novel TreeMap Matrix and Adjacency Matrix visualizations to aid in the exploration of assembly code clones extracted from Kam1n0. We conducted a preliminary evaluation with the analyst stakeholders, and we show how CloneCompass enables these users to visually and interactively explore assembly code clones detected by Kam1n0 with suspected vulnerabilities. To further validate our tool and extend its usability to source code clones, we carried out a Linux case study, where we explored the clones in the Linux kernel detected by CCFinder and gained a number of insights about the cloning activities that may have occurred in the development of the Linux kernel. / Graduate
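As a toy illustration of the data behind such an Adjacency Matrix view, the sketch below builds a symmetric similarity matrix from a list of function-clone pairs. The pair format and function names are invented for the example and do not reflect Kam1n0's actual output.

```python
import numpy as np

# Hypothetical clone pairs: (function_a, function_b, similarity in [0, 1]).
clone_pairs = [
    ("memcpy_variant", "memcpy_unrolled", 0.92),
    ("parse_header", "parse_footer", 0.71),
    ("memcpy_variant", "parse_header", 0.18),
]

# Index every function name, then fill a symmetric adjacency matrix.
functions = sorted({f for a, b, _ in clone_pairs for f in (a, b)})
index = {name: i for i, name in enumerate(functions)}
matrix = np.zeros((len(functions), len(functions)))
for a, b, sim in clone_pairs:
    matrix[index[a], index[b]] = sim
    matrix[index[b], index[a]] = sim

print(functions)
print(matrix)
```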
124

Fluid Interactive Information Visualization: A Visualization Tool for Book Recommendation

Xu, Yinglai January 2017 (has links)
The accuracy of recommender systems has been widely discussed, and the user experience of these systems is now becoming a new focus. Combining recommendations with information visualization (InfoVis) can be a way to improve the acceptance of the system. This thesis investigates how InfoVis can support recommender systems, aiming to improve the enjoyment and engagement of the user experience. Three prototypes were designed to evaluate the impact of using InfoVis, and fluid interactive InfoVis, on user engagement and enjoyment in the exploration of recommended books. Product reaction cards and a Likert questionnaire were used in the evaluation. Results suggest that InfoVis is a viable way to improve the engagement and enjoyment of a book recommender system and should be researched further.
125

JiangX_[MS]_Chen.pdf

Xingyu Jiang (13549585) 03 December 2023 (has links)
<p dir="ltr">Invasive species wreak havoc on global ecosystems, with negative ecological and economic consequences. Human activities, primarily stemming from globalization, trade, and increased travel, have played a significant role in accelerating species invasions. To manage and possibly mitigate these challenges, humans can harness data analysis to predict and control species invasions. Addressing this issue requires an understanding of the spatiotemporal dynamics of invasions. This research developed an innovative visualization tool designed to illustrate complex spatiotemporal data pertaining to species invasion routes. By analyzing pest invasion records spanning from 1905 to 2020, the tool presents the invasion trajectories of four non-native species in the eastern United States. Implementing spatial tools such as road networks and terrain, the visualization clarifies the spatiotemporal progression of these invasions, allowing users to intuitively determine invasion epicenters, and identify propagation pathways. Additional features enable the examination of correlations between highway systems, terrain, and invasion dynamics. Following a comprehensive training and exploration phase with domain experts, the efficacy of the tool was proven. The findings underline the proposed solution’s potential to enhance users’ comprehension of invasion dynamics, highlight intrusion centers, and indicate the influence of external factors on species expansion. This study not only validates the visualization tool’s capability but also serves as a foundation for future spatiotemporal research endeavors.</p>
126

Comparative Analysis of the Performance of ARCore and WebXR APIs for AR Applications

Shaik, Abu Bakr Rahman, Asodi, Venkata Sai Yakkshit Reddy January 2023 (has links)
Background: Augmented Reality (AR) has become a popular technology in recent years. Two of the most prominent AR APIs are ARCore, developed by Google, and WebXR, an open standard for AR and Virtual Reality (VR) experiences on the web. A comparative analysis of the performance of these APIs in terms of CPU load, network latency, and frame rate is needed to determine which API is more suitable for cloud-based object visualisation AR applications integrated with Firebase, a cloud-based backend-as-a-service platform for app development.

Objectives: This study provides a comparative analysis of the performance of the ARCore API and the WebXR API for an object visualisation application integrated with Firebase Cloud Storage. The objective is to analyse and compare the performance of the APIs in terms of latency, frame rate, and CPU load, to provide insights into their strengths and weaknesses and to identify the key factors that may influence the choice of API for object visualisation.

Methods: Two object visualisation AR applications were developed, one using the ARCore API and one using the WebXR API, both backed by Firebase Cloud Storage. Frame rate, CPU load, and latency were used as performance metrics, and performance data was collected from both applications. The collected data was analysed and visualised to provide insights into the strengths and weaknesses of each API.

Results: The performance metrics of the AR applications, including frame rate, CPU load, and latency, were analysed and visualised. The WebXR API performed better in terms of CPU load and frame rate, while the ARCore API performed better in terms of latency.

Conclusion: The WebXR API showed advantages in lower CPU load and higher frame rates compared to the ARCore API, which had lower network latency. These results suggest that the WebXR API is more suitable for efficient and responsive object visualisation in augmented reality applications.
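The sketch below illustrates how per-session samples of the three metrics named in this abstract (frame rate, CPU load, latency) might be summarised for a side-by-side comparison. The numbers are placeholders, not the study's measurements.

```python
import statistics

# Placeholder measurements; the study's actual data is not reproduced here.
samples = {
    "ARCore": {"fps": [54, 57, 55], "cpu_load": [0.46, 0.48, 0.44], "latency_ms": [110, 120, 105]},
    "WebXR":  {"fps": [58, 60, 59], "cpu_load": [0.35, 0.37, 0.33], "latency_ms": [150, 145, 160]},
}

# Summarise each API by the mean of each metric.
for api, metrics in samples.items():
    summary = {name: round(statistics.mean(values), 2) for name, values in metrics.items()}
    print(api, summary)
```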
127

Correlating nano-scale surface replication accuracy and cavity temperature in micro-injection moulding using in-line process control and high-speed thermal imaging

Baruffi, F., Gülçür, Mert, Calaon, M., Romano, J.-M., Penchev, P., Dimov, S., Whiteside, Benjamin R., Tosello, G. 22 October 2019 (has links)
Yes / Micro-injection moulding (μIM) stands out as a preferable technology for enabling the mass production of polymeric components with micro- and nano-structured surfaces. One of the major challenges of these processes relates to the quality assurance of the manufactured surfaces: the time needed to perform accurate 3D surface acquisitions is typically much longer than a single moulding cycle, making it impossible to integrate in-line measurements into the process chain. In this work, the authors proposed a novel solution to this problem by defining a process monitoring strategy aimed at linking sensitive in-line monitored process variables with the replication quality. A nano-structured surface for antibacterial applications was manufactured on a metal insert by laser structuring and replicated using two different polymers, polyoxymethylene (POM) and polycarbonate (PC). The replication accuracy was determined using a laser scanning confocal microscope and its dependence on the variation of the main μIM parameters was studied using a Design of Experiments (DoE) approach. During each process cycle, the temperature distribution of the polymer inside the cavity was measured using a high-speed infrared camera by means of a sapphire window mounted in the movable plate of the mould. The temperature measurements showed a high level of correlation with the replication performance of the μIM process, thus providing a fast and effective way to control the quality of the moulded surfaces in-line. / MICROMAN project (“Process Fingerprint for Zero-defect Net-shape MICRO MANufacturing”, http://www.microman.mek.dtu.dk/) - H2020 (Project ID: 674801), H2020 agreement No. 766871 (HIMALAIA), H2020 ITN Laser4Fun (agreement No. 675063)
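As a minimal sketch of the kind of correlation reported in this abstract, the snippet below relates per-cycle peak cavity temperature to a measured replication-accuracy value via the Pearson coefficient. The figures are placeholders, and the thesis's DoE and thermal-imaging pipeline is of course far richer than this.

```python
import numpy as np

# Placeholder per-cycle data: peak cavity temperature (deg C) from the infrared camera
# and a replication-accuracy measure (e.g. structure height ratio) from confocal microscopy.
cavity_temp_c = np.array([78.1, 80.4, 82.2, 84.0, 85.9, 88.3])
replication = np.array([0.61, 0.66, 0.70, 0.74, 0.79, 0.85])

# Pearson correlation between the in-line process signal and the replication quality.
r = np.corrcoef(cavity_temp_c, replication)[0, 1]
print(f"Pearson r = {r:.3f}")
```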
128

Developing the 3D imaging of Iron Age art in the ENTRANS Project

Büster, Lindsey S., Evans, Adrian A., Armit, Ian, Kershaw, Rachael 09 1900 (has links)
No / Although 3D imaging is increasingly used in archaeology as a presentational tool, advances in technology are such that its analytical potential is beginning to be realised. As part of the ENTRANS Project, 3D imaging has been undertaken on a range of Iron Age objects from museums in Slovenia and Croatia, including several items of situla art. This paper reviews the potential and limitations of various imaging techniques in relation to both presentational and analytical objectives. It considers such variables as time and resource constraints, the size and portability of objects and equipment, and the potential problems caused by past conservation. It concludes that 3D imaging, appropriately utilised, has great potential in both the analysis and presentation of Iron Age art. / HERA
129

BradPhys to BradViz or from archaeological science to heritage science

28 February 2017 (has links)
Yes / Archaeology is a broad church and its role as a "two cultures" discipline is frequently cited. This position at the interface of the arts and sciences remains central to archaeological activity, but there have been significant changes in the structure of archaeology and its relationship to society overall. The growth of heritage science, in particular, is driving change and development within archaeology at a national and international level. This paper discusses these developments in relation to the author's own research trajectory and considers the significance of such change.
130

Understanding and Improving Object-Oriented Software Through Static Software Analysis

Irwin, Warwick Allan January 2007 (has links)
Software engineers need to understand the structure of the programs they construct. This task is made difficult by the intangible nature of software, and its complexity, size and changeability. Static analysis tools can help by extracting information from source code and conveying it to software engineers. However, the information provided by typical tools is limited, and some potentially rich veins of information - particularly metrics and visualisations - are under-utilised because developers cannot easily acquire or make use of the data. This thesis documents new tools and techniques for static analysis of software. It addresses the problem of generating parsers directly from standard grammars, thus avoiding the common practice of customising grammars to comply with the limitations of a given parsing algorithm, typically LALR(1). This is achieved by a new parser generator that applies a range of bottom-up parsing algorithms to produce a hybrid parsing automaton. Consequently, we can generate more powerful deterministic parsers - up to and including LR(k) - without incurring the combinatorial explosion that makes canonical LR(k) parsers impractical. The range of practical parsers is further extended to include GLR, which was originally developed for natural language parsing but is shown here to also have advantages for static analysis of programming languages. This emphasis on conformance to standard grammars improves the rigour of static analysis tools and allows clearer definition and communication of derived information, such as metrics. Beneath the syntactic structure of software (exposed by parsing) lies the deeper semantic structure of declarations, scopes, classes, methods, inheritance, invocations, and so on. In this work, we present a new tool that performs semantic analysis on parse trees to produce a comprehensive semantic model suitable for processing by other static analysis tools. An XML pipeline approach is used to expose the syntactic and semantic models of the software and to derive metrics and visualisations. The approach is demonstrated producing several types of metrics and visualisations for real software, and the value of static analysis for informing software engineering decisions is shown.
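As a small sketch of the XML-pipeline idea described in this abstract, the snippet below derives a simple metric (methods per class) from a semantic model exposed as XML. The element and attribute names are invented for illustration and are not the tool's actual schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical semantic-model fragment; element and attribute names are invented.
model_xml = """
<model>
  <class name="Parser">
    <method name="parse"/><method name="peek"/><method name="shift"/>
  </class>
  <class name="Lexer">
    <method name="next_token"/>
  </class>
</model>
"""

root = ET.fromstring(model_xml)
# Derive a simple metric from the semantic model: number of methods per class.
for cls in root.findall("class"):
    print(cls.get("name"), "->", len(cls.findall("method")), "methods")
```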
