About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
71

Usability Engineering Applied to an Electromagnetic Modeling Tool

Fortson, Samuel King 19 July 2012 (has links)
There are very few software packages for model-building and visualization in electromagnetic geophysics, particularly when compared to other geophysical disciplines, such as seismology. The purpose of this thesis is to design, develop, and test a geophysical model-building interface that allows users to parameterize the 2D magnetotellurics problem. Through the evaluation of this interface, feedback was collected from a usability specialist and a group of geophysics graduate students to study the steps users take to work through the 2D forward-modeling problem, and to analyze usability errors encountered while working with the interface to gain a better understanding of how to build a more effective interface. Similar work has been conducted on interface design in other fields, such as medicine and consumer websites. Usability Engineering is the application of a systematic set of methods to the design and development of software with the goal of making the software more learnable, easy to use, and accessible. Two different Usability Engineering techniques, Heuristic Evaluation and Thinking Aloud Protocol, were involved in the evaluation of the interface designed in this study (FEM2DGUI). Heuristic Evaluation is a usability inspection method that employs a usability specialist to detect errors based on a known set of guidelines and personal experience. Thinking Aloud Protocol is a usability evaluation method where potential end-users are observed as they verbalize their every step as they work through specific scenarios with an interface. These Usability Engineering methods were combined in an effort to understand how the first prototype of FEM2DGUI could be refined to make it more usable and to understand how end-users work through the forward-modeling problem. The Usability Engineering methods employed in this project uncovered multiple usability errors that were corrected through a refinement of the interface.
Discovery of these errors helped with refining the system to become more robust and usable, which is believed to aid users in more efficient model-building. Because geophysical model-building is inherently a difficult task, it is possible that other model-building graphical user interfaces could benefit from the application of Usability Engineering methods, such as those presented in this research. / Master of Science
72

Three Essays on Digital Annual Reports for Nonprofessional Investors: The Impacts of Presentation Formats on Investment-Related Judgments and Decisions

Zhang (James), Yibo 21 March 2018 (has links)
The goal of this dissertation is to investigate the impact of presentation formats on nonprofessional investors’ impressions of firm performance in the context of digital annual reports. The dissertation implements a three-essay approach. Essay 1 examines whether the effect of positive/negative financial performance news on nonprofessional investors’ impressions of management and firm performance depends on whether the graphical display of that news is vivid or pallid. Conducting a 2 x 2 between-participants experiment with 470 participants from Amazon Mechanical Turk (M-Turk), I find that when the news is positive, presenting graphs vividly allows nonprofessional investors to have a more positive impression of management and firm performance. In contrast, when the news is negative, presenting graphs vividly has little effect on nonprofessional investors’ impressions. The essay informs regulators and practice by demonstrating that vivid graphical website disclosures can significantly affect the behavior of nonprofessional investors when the financial performance news is positive, but the effect is minimal when the news is negative. The essay also contributes to the financial disclosure literature by demonstrating the impact of graphical vividness in presenting financial performance information. Essay 2 conducts a 2 x 2 between-participants experiment with 565 participants from M-Turk. I investigate whether varying the user interactivity and graphical vividness of the presentation of non-financial good news counteracts bad news presented in the audited financial data. I find a positive effect of user interactivity when the graphical presentation of non-financial information is vivid but not when it is pallid. In mediation analyses, I find unexpected results in that user engagement negatively mediates the effects of user interactivity on nonprofessional investors’ perceptions of firm performance and investment-related judgments and decisions. 
Subsequent analyses indicate that user interactivity alone reduces nonprofessional investors’ satisfaction with digital annual reports, but the joint effect of user interactivity and graphical vividness overcomes this negative effect. These results have implications for designers of digital annual reports, investor groups consuming this information, and regulators concerned about the need for assurance on the (unregulated) non-financial disclosures in annual reports. Essay 3 studies whether using hyperlinks that connect summarized financial graphs with detailed financial statement information reduces the effect of graphical distortions on nonprofessional investors’ perceptions of firm performance. Using 385 participants from M-Turk, I find that while distorted graphs do bias nonprofessional investors’ perceptions of firm performance, the provision and use of hyperlinks to the underlying source information eliminate those effects (i.e., debias). Using the dual-process theory of cognitive processing (Kahneman and Frederick 2002; Evans 2006, 2008), I find that hyperlinks enhance the overriding effect of System 2 processing (i.e., analytical processing) on System 1 processing (i.e., intuitive processing) and indirectly reduce the decision-biasing effect of distorted graphs on nonprofessional investors’ perceptions. The study contributes to standard setting as well as financial reporting practice by providing empirical evidence that the SEC’s policy guidance on implementing hyperlinks has benefits to nonprofessional investors. Second, it contributes to both the literature on distorted graphs and hyperlinks by suggesting hyperlinking to source data as a technique to mitigate the effects of graphical distortions. 
The findings of the three essays have implications for the designers of digital annual reports, investor groups consuming this information, and regulators concerned about the need to standardize the presentation formats in digital annual reports and potentially require auditor oversight of graphical displays of both financial and non-financial data in these reports.
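Essay 3's graphical-distortion manipulation can be made concrete with Tufte's "lie factor", the ratio of the effect a graphic displays to the effect present in the underlying data. The dissertation does not use this metric; the sketch below is only an illustration of how a truncated axis exaggerates a two-bar comparison:

```python
def lie_factor(data, drawn):
    """Tufte's lie factor for a two-point comparison: the relative change
    the graphic shows divided by the relative change in the data.
    Values far from 1.0 indicate a distorted display."""
    effect_data = (data[1] - data[0]) / data[0]
    effect_drawn = (drawn[1] - drawn[0]) / drawn[0]
    return effect_drawn / effect_data

# Revenue grew from 100 to 110 (10%), but bars drawn against an axis
# truncated at 95 have heights 5 and 15, visually a 200% jump.
print(lie_factor((100, 110), (5, 15)))  # → 20.0
```

An undistorted chart (bars proportional to the data) yields a lie factor of exactly 1.0.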
73

The Influence of Science Textbook Graphical Design on Learning Performance of Fifth Graders

Wang, Ya-ting 05 August 2011 (has links)
This study investigated the influence of science textbook graphical design, based on principles of cognitive processing, on the graphical-textual learning performance of fifth graders at different levels of academic achievement. Fifty-seven elementary students participated in a quasi-experimental design, supplemented by a semi-structured questionnaire and interviews. In the experimental treatment, students in different classes used different graphical science textbooks: one designed according to cognitive principles and one from a commercial publisher. Learning performance relative to the commercially published textbook was assessed using self-developed tests, and the quantitative data were analyzed with independent-sample t-tests. A graph-perception questionnaire was administered, and nine students from the high- and low-achievement groups in each class were then interviewed; the qualitative data were analyzed to infer their mental processes. The results showed: (1) For high-achieving students, the experimental group significantly outperformed the control group on both the knowledge dimension and the cognitive-process dimension, with a strong effect size; for low-achieving students, there was no significant difference between the two groups. (2) Schema-organization graphs enhanced high-achieving students' conceptual knowledge and helped low-achieving students recognize the outline of the material, while experiment-transformation graphs enhanced high-achieving students' procedural knowledge and helped low-achieving students take part in the experiments.
74

Apprentissage de graphes structuré et parcimonieux dans des données de haute dimension avec applications à l’imagerie cérébrale / Structured Sparse Learning on Graphs in High-Dimensional Data with Applications to NeuroImaging

Belilovsky, Eugene 02 March 2018 (has links)
This dissertation presents novel structured sparse learning methods on graphs that address commonly found problems in the analysis of neuroimaging data as well as other high-dimensional data with few samples. The first part of the thesis proposes convex relaxations of discrete and combinatorial penalties involving sparsity and bounded total variation on a graph, as well as a bounded ℓ2 norm. These are developed with the aim of learning an interpretable predictive linear model, and we demonstrate their effectiveness on neuroimaging data as well as a sparse image recovery problem. The subsequent parts of the thesis consider structure discovery in undirected graphical models from few observations. In particular, we focus on invoking sparsity and other structural assumptions in Gaussian Graphical Models (GGMs). To this end we make two contributions. First, we develop an approach to identify differences between GGMs known to have similar structure: we derive the distribution of parameter differences under a joint penalty when the parameters are known to be sparse in their difference, and we show how this can be used to obtain confidence intervals on edge differences between GGMs. Second, we introduce a novel learning-based approach to structure discovery in undirected graphical models from observational data, demonstrating that neural networks can be used to learn effective estimators for this problem. These methods are empirically shown to be flexible and efficient alternatives to existing techniques.
75

A GRAPHICAL USER INTERFACE MIMO CHANNEL SIMULATOR

Panagos, Adam G., Kosbar, Kurt 10 1900 (has links)
International Telemetering Conference Proceedings / October 18-21, 2004 / Town & Country Resort, San Diego, California / Multiple-input multiple-output (MIMO) communication systems are attracting attention because their channel capacity can exceed that of single-input single-output systems, with no increase in bandwidth. While MIMO systems offer substantial capacity improvements, it can be challenging to characterize and verify their channel models. This paper describes a software MIMO channel simulator with a graphical user interface that allows the user to easily investigate a number of MIMO channel characteristics for a channel recently proposed by the 3rd Generation Partnership Project (3GPP).
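The capacity advantage described above follows from the standard MIMO capacity formula C = log2 det(I + (SNR/Nt) H Hᴴ). The sketch below is not the paper's simulator (which targets the 3GPP channel model); it only illustrates, under an assumed i.i.d. Rayleigh-fading channel, how a 4x4 array compares with SISO:

```python
import numpy as np

rng = np.random.default_rng(0)

def mimo_capacity(H, snr):
    """Capacity (bits/s/Hz) of channel matrix H at linear SNR `snr`,
    with transmit power split equally across the Nt antennas."""
    nr, nt = H.shape
    gram = H @ H.conj().T
    return float(np.real(np.log2(np.linalg.det(np.eye(nr) + (snr / nt) * gram))))

# Assumed i.i.d. Rayleigh fading at 10 dB SNR: 4x4 MIMO vs. 1x1 SISO
snr = 10 ** (10 / 10)
H_mimo = (rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))) / np.sqrt(2)
H_siso = (rng.standard_normal((1, 1)) + 1j * rng.standard_normal((1, 1))) / np.sqrt(2)
print(mimo_capacity(H_mimo, snr), mimo_capacity(H_siso, snr))
```

Averaged over many channel draws, the 4x4 capacity grows roughly linearly in min(Nt, Nr), which is the effect the simulator lets users explore.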
76

TELEMETRY SYSTEMS FOR THE 90’s: GRAPHICAL USER INTERFACES WITH PROGRAMMABLE BEHAVIOR

10 1900 (has links)
International Telemetering Conference Proceedings / October 26-29, 1992 / Town and Country Hotel and Convention Center, San Diego, California / The design and development of user interfaces for telemetry data processing systems is undergoing a period of rapid change. The migration to graphics workstations is raising expectations and redefining requirements for user interfaces in the nineties. User interfaces which present data in crude tabular form on alphanumeric terminals are on a path to extinction. Modern telemetry user interfaces are hosted on graphics workstations rich with power and software tools. This paper summarizes the evolution of user interfaces for telemetry systems developed by Computer Sciences Corporation, highlighting key enhancements and use of third-party software. The benefits of prototyping and the trend toward programmable interface behavior are explored.
77

Machine Learning in Computational Biology: Models of Alternative Splicing

Shai, Ofer 03 March 2010 (has links)
Alternative splicing, the process by which a single gene may code for similar but different proteins, is an important process in biology, linked to development, cellular differentiation, genetic diseases, and more. Genome-wide analysis of alternative splicing patterns and regulation has been recently made possible due to new high throughput techniques for monitoring gene expression and genomic sequencing. This thesis introduces two algorithms for alternative splicing analysis based on large microarray and genomic sequence data. The algorithms, based on generative probabilistic models that capture structure and patterns in the data, are used to study global properties of alternative splicing. In the first part of the thesis, a microarray platform for monitoring alternative splicing is introduced. A spatial noise removal algorithm that removes artifacts and improves data fidelity is presented. The GenASAP algorithm (generative model for alternative splicing array platform) models the non-linear process in which targeted molecules bind to a microarray’s probes and is used to predict patterns of alternative splicing. Two versions of GenASAP have been developed. The first uses variational approximation to infer the relative amounts of the targeted molecules, while the second incorporates a more accurate noise and generative model and utilizes Markov chain Monte Carlo (MCMC) sampling. GenASAP, the first method to provide quantitative predictions of alternative splicing patterns on large scale data sets, is shown to generate useful and precise predictions based on independent RT-PCR validation (a slow but more accurate approach to measuring cellular expression patterns). In the second part of the thesis, the results obtained by GenASAP are analysed to reveal jointly regulated genes. The sequences of the genes are examined for potential regulatory factors binding sites using a new motif finding algorithm designed for this purpose. 
The motif finding algorithm, called GenBITES (generative model for binding sites) uses a fully Bayesian generative model for sequences, and the MCMC approach used for inference in the model includes moves that can efficiently create or delete motifs, and extend or contract the width of existing motifs. GenBITES has been applied to several synthetic and real data sets, and is shown to be highly competitive at a task for which many algorithms already exist. Although developed to analyze alternative splicing data, GenBITES outperforms most reported results on a benchmark data set based on transcription data.
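GenBITES itself is a fully Bayesian model with richer MCMC moves (motif birth/death, width changes); as a much simpler illustration of the underlying sampling idea, here is a minimal Gibbs motif sampler with fixed width. All names and defaults are illustrative, not from the thesis:

```python
import random

BASES = "ACGT"

def gibbs_motif(seqs, w, iters=200, pseudo=1.0, seed=0):
    """Minimal Gibbs sampler for one shared motif of width w.
    Returns one sampled motif start position per sequence."""
    rng = random.Random(seed)
    pos = [rng.randrange(len(s) - w + 1) for s in seqs]
    for _ in range(iters):
        for i in range(len(seqs)):
            # Build a position weight matrix from every sequence except i
            counts = [[pseudo] * 4 for _ in range(w)]
            for j, s in enumerate(seqs):
                if j == i:
                    continue
                for k in range(w):
                    counts[k][BASES.index(s[pos[j] + k])] += 1
            totals = [sum(col) for col in counts]
            # Score each candidate start in sequence i, then sample one
            s = seqs[i]
            weights = []
            for start in range(len(s) - w + 1):
                p = 1.0
                for k in range(w):
                    p *= counts[k][BASES.index(s[start + k])] / totals[k]
                weights.append(p)
            pos[i] = rng.choices(range(len(weights)), weights=weights)[0]
    return pos
```

A full treatment would also score candidates against a background model and integrate out the motif width, which is where the reversible moves described above come in.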
78

Message Passing Algorithms for Facility Location Problems

Lazic, Nevena 09 June 2011 (has links)
Discrete location analysis is one of the most widely studied branches of operations research, whose applications arise in a wide variety of settings. This thesis describes a powerful new approach to facility location problems - that of message passing inference in probabilistic graphical models. Using this framework, we develop new heuristic algorithms, as well as a new approximation algorithm for a particular problem type. In machine learning applications, facility location can be seen as a discrete formulation of clustering and mixture modeling problems. We apply the developed algorithms to such problems in computer vision. We tackle the problem of motion segmentation in video sequences by formulating it as a facility location instance and demonstrate the advantages of message passing algorithms over current segmentation methods.
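For context on the problem the message-passing algorithms target, a common baseline for uncapacitated facility location is the greedy heuristic: repeatedly open whichever facility yields the largest net cost reduction. This sketch is a standard baseline, not the thesis's method:

```python
import numpy as np

def greedy_ufl(open_cost, assign_cost, big=1e18):
    """Greedy heuristic for uncapacitated facility location.
    open_cost: (F,) facility opening costs; assign_cost: (F, C) cost of
    serving each customer from each facility. Returns (opened, total cost)."""
    opened = []
    best = np.full(assign_cost.shape[1], big)  # cheapest service per customer so far
    while True:
        new_best = np.minimum(best, assign_cost)           # (F, C) if f were opened
        delta = (best - new_best).sum(axis=1) - open_cost  # net savings per facility
        delta[opened] = -np.inf                            # never reopen
        f = int(np.argmax(delta))
        if delta[f] <= 0 and opened:
            break
        opened.append(f)
        best = new_best[f]
    return sorted(opened), float(open_cost[opened].sum() + best.sum())
```

In the clustering view mentioned above, customers are data points, facilities are candidate exemplars, assignment costs are distances, and opening costs penalize the number of clusters.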
79

A Probabilistic Approach to Image Feature Extraction, Segmentation and Interpretation

Pal, Chris January 2000 (has links)
This thesis describes a probabilistic approach to image segmentation and interpretation. The focus of the investigation is the development of a systematic way of combining color, brightness, texture and geometric features extracted from an image to arrive at a consistent interpretation for each pixel in the image. The contribution of this thesis is thus the presentation of a novel framework for the fusion of extracted image features producing a segmentation of an image into relevant regions. Further, a solution to the sub-pixel mixing problem is presented based on solving a probabilistic linear program. This work is specifically aimed at interpreting and digitizing multi-spectral aerial imagery of the Earth's surface. The features of interest for extraction are those of relevance to environmental management, monitoring and protection. The presented algorithms are suitable for use within a larger interpretive system. Some results are presented and contrasted with other techniques. The integration of these algorithms into a larger system is based firmly on a probabilistic methodology and the use of statistical decision theory to accomplish uncertain inference within the visual formalism of a graphical probability model.
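The feature-fusion idea can be sketched as a naive-Bayes combination of per-pixel likelihoods. This is a deliberate simplification: the thesis works within a graphical probability model rather than assuming the feature channels are independent.

```python
import numpy as np

def fuse_features(likelihoods, prior):
    """Naive-Bayes fusion of per-pixel feature likelihoods.
    likelihoods: list of (H, W, K) arrays, one per feature channel
                 (e.g. color, brightness, texture), giving p(feature | class k).
    prior: (K,) prior over the K region classes.
    Returns the (H, W) map of most probable class per pixel."""
    log_post = np.log(prior)[None, None, :]
    for lik in likelihoods:
        # Sum log-likelihoods; clip to avoid log(0) on impossible observations
        log_post = log_post + np.log(np.clip(lik, 1e-12, None))
    return np.argmax(log_post, axis=2)
```

A single confident channel (say, a distinctive texture) can override weak evidence from the others, which is the behavior one wants when fusing heterogeneous features.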
80

Towards cognitive support in knowledge engineering : an adoption-centred customization framework for visual interfaces

Ernst, Neil A. 10 April 2008 (has links)
No description available.
