241

Development Of A Graphical User Interface For Composite Bridge Finite Element Analysis

Guven, Deniz 01 December 2006 (has links) (PDF)
Curved bridges with steel/concrete composite girders have been used frequently in recent years. Analysis of these structural systems presents a variety of challenges. The finite element method offers the most elaborate treatment for these systems; however, its use in routine design practice is limited by its modeling requirements. In recent years, a finite element program named UTrAp was developed to analyze the construction stages of curved and straight composite bridges. The original graphical user interface could not be used with the modified computation engine, so the focus of this thesis is to develop a brand new graphical user interface, with enhanced visual capabilities, that is compatible with the engine. Pursuant to this goal, a graphical user interface was developed in the C++ programming language together with the OpenGL libraries. The interface is linked to the computational engine to enable direct interaction between the two programs. This thesis presents the development of the GUI and the modifications to the computational engine. Moreover, the analysis results pertaining to the newly added features are checked against analytical solutions and against recommendations in design specifications.
242

Modeling the User Interfaces: A Component-based Interface Research for Integrating the Net-PAC Model and UML

Tsai, Shuen-Jen 06 June 2002 (has links)
The graphical user interface (GUI) has become the key element of modern information systems and is commonly viewed as one of the decisive factors in the success of an information system project. To help develop effective GUIs, software vendors have introduced many tools to meet the needs of designing a variety of interfaces. Such modern design tools give system developers vehicles for creating sophisticated GUIs with little code. However, the complexity of many GUIs and the varying expectations among users, designers, and developers make communication among them, and the use of most prevailing design tools, a real challenge. An integrated tool for better design and development of GUIs may help alleviate the problems caused by miscommunication and by the knowledge gaps among users, designers, and developers. In this paper, a new design tool is proposed that integrates the GUI design techniques embedded in the Unified Modeling Language (UML) with the Presentation-Abstraction-Control (PAC) model in a Web environment (Net-PAC). The potential problems of using vendor-provided design methodologies are presented, special features of the proposed integrated tool are then discussed, and real-world cases using the integrated techniques illustrate the advantages of the proposed methodology.
243

Investigation and Integration of a Scalable Vector Graphics Engine on a Set-Top Box

Johansson, Fredrik January 2008 (has links)
A set-top box is an embedded device, much like a computer with limited capabilities. Its main purpose is to decode a video signal and output it to a TV. The set-top box market is constantly growing, and to be competitive in it, a set-top box has to be able to do more than only TV. One way to make an attractive product is to give it an appealing user interface. This thesis is part of a larger effort at the company to find new ways to create graphical user interfaces. Its goal is to investigate which SVG implementations exist, determine which one is most suitable for an integration attempt, and then perform the integration.

Several SVG engines were investigated, and one provided by the company was selected for integration. Three ways to integrate the SVG engine were identified. One of these alternatives was to extend the callback interface between the engine and the underlying platform. Because of its good fit with the current architecture, this alternative was chosen and implemented. As part of this investigation, a demo application suite of SVG content was also constructed.

This investigation resulted in a working integration of the chosen SVG engine on the platform. It has also shown that SVG is a suitable language for building graphical user interfaces on set-top boxes.
244

Automatic visual display design and creation

Salisbury, Leslie Denise Pinnel, January 2001 (has links)
Thesis (Ph.D.), University of Washington, 2001. Vita. Includes bibliographical references (p. 155-162).
245

Probabilistic graphical modeling as a use stage inventory method for environmentally conscious design

Telenko, Cassandra 27 March 2013 (has links)
Probabilistic graphical models (PGMs) provide the capability of evaluating uncertainty and variability in product use, in addition to correlating the results with aspects of the usage context. Although energy consumption during use can account for the majority of a product's environmental impact, common practice is to neglect operational variability in life cycle inventories (LCIs). As a result, the relationship between a product's usage context and its environmental performance is rarely considered in design evaluations. This dissertation demonstrates a method for describing the usage context as a set of factors and representing that context through a PGM. The application to LCIs is demonstrated through a lightweight vehicle design example: although replacing steel vehicle parts with aluminum parts reduces weight and can increase fuel economy, the energy invested in producing aluminum parts is much larger than that for steel parts, so the tradeoff between energy investment and fuel savings depends strongly on the vehicle's fuel economy and lifetime mileage. The demonstration PGM is constructed by relating factors such as driver behavior, alternative driving schedules, and residential density, with local conditional probability distributions derived from publicly available data sources. Unique scenarios are then assembled from sets of conditions on these factors to provide insight into sources of variance. The vehicle example demonstrated that implementing realistic usage scenarios via a PGM can provide a much higher-fidelity investigation of energy savings during use, and that distinct scenarios can have significantly different implications for the effectiveness of lightweight vehicle designs. Scenarios with large families, for example, yield high energy savings, especially if the vehicle is used for commuting or in stop-and-go traffic conditions; scenarios with small families and efficient driving schedules yield lower energy savings for lightweight vehicle designs.
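To make the flavor of such a usage-context model concrete, here is a minimal sketch, not taken from the dissertation: all factor names, probabilities, and energy coefficients below are invented for illustration. It enumerates a few usage-context factors, weights them by assumed conditional probabilities, and compares expected lifetime energy for a steel versus an aluminum design.

```python
# Hypothetical sketch of a usage-context model for a lightweight-vehicle LCI.
# All probabilities and energy figures are invented for demonstration only.
from itertools import product

# Usage-context factors with assumed probabilities.
p_driver = {"calm": 0.6, "aggressive": 0.4}
p_schedule = {"commuting": 0.5, "errands": 0.3, "highway": 0.2}
lifetime_km = {"commuting": 250_000, "errands": 150_000, "highway": 300_000}

def fuel_energy_per_km(design, driver, schedule):
    """Assumed fuel energy use (MJ/km) as a function of design and context."""
    base = {"steel": 2.4, "aluminum": 2.2}[design]      # lighter -> less fuel
    base *= 1.15 if driver == "aggressive" else 1.0     # aggressive penalty
    base *= 1.10 if schedule == "commuting" else 1.0    # stop-and-go penalty
    return base

production_energy = {"steel": 30_000, "aluminum": 60_000}  # MJ, invented

def expected_lifecycle_energy(design):
    """Marginalize over the joint distribution of usage-context factors."""
    total = production_energy[design]
    for (driver, pd), (sched, ps) in product(p_driver.items(), p_schedule.items()):
        total += pd * ps * fuel_energy_per_km(design, driver, sched) * lifetime_km[sched]
    return total

for design in ("steel", "aluminum"):
    print(design, f"{expected_lifecycle_energy(design):,.0f} MJ expected")
```

Conditioning on a single scenario (say, driver="aggressive" and schedule="commuting") instead of marginalizing reproduces the kind of per-scenario comparison the abstract describes.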
246

Multi-scale error-correcting codes and their decoding using belief propagation

Yoo, Yong Seok 25 June 2014 (has links)
This work is motivated by error-correcting codes in the brain. To counteract the effect of representation noise, a large number of neurons participate in encoding even low-dimensional variables. In many brain areas, the mean firing rates of neurons as a function of the represented variable, called tuning curves, have a unimodal shape centered at different values, defining a unary code. This dissertation focuses on a new type of neural code in which neurons have periodic tuning curves with a diversity of periods. Neurons that exhibit this tuning are the grid cells of the entorhinal cortex, which represent self-location in two-dimensional space. First, we investigate the mutual information between such multi-scale codes and the coded variable as a function of tuning curve width. For decoding, we consider maximum likelihood (ML) and plausible neural network (NN) based models. For unary neural codes, Fisher information increases with narrower tuning, regardless of the decoding method. By contrast, for the multi-scale neural code, the optimal tuning curve width depends on the decoding method: while narrow tuning is optimal for ML decoding, a finite width, matched to the statistics of the noise, is optimal with a NN decoder. This finding may explain why actual neural tuning curves have relatively wide tuning. Next, motivated by the observation that multi-scale codes involve non-trivial decoding, we examine a decoding algorithm based on belief propagation (BP), because BP promises certain gains in decoding efficiency. The decoding problem is first formulated as a subset selection problem on a graph and then approximately solved by BP. Even though the graph has many cycles, BP converges to a fixed point after a few iterations, and the mean square error of BP approaches that of ML at high signal-to-noise ratios. Finally, using the multi-scale code, we propose a joint source-channel coding scheme that allows separate senders to transmit complementary information over additive Gaussian noise channels without cooperation. The receiver decodes one sender's codeword using the other as side information and achieves a lower distortion using the same number of transmissions. The proposed scheme offers a new framework for designing distributed joint source-channel codes for continuous variables.
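As a rough illustration of why multi-scale periodic codes require non-trivial decoding, the following sketch (a toy construction, not the dissertation's model; the periods, tuning width, and noise level are invented) decodes a position from the noisy responses of units with periodic Gaussian-bump tuning curves by brute-force maximum likelihood:

```python
# Toy ML decoder for a multi-scale periodic ("grid-cell-like") population code.
# Periods, tuning width, and noise level are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
periods = [3.0, 5.0, 7.0]          # coprime scales, like grid-cell modules
sigma_tune, sigma_noise = 0.5, 0.1

def tuning(x, period):
    """Periodic unimodal tuning curve: Gaussian bump repeated every `period`."""
    d = np.minimum(np.abs(x % period), period - np.abs(x % period))
    return np.exp(-d**2 / (2 * sigma_tune**2))

x_true = 37.3
responses = np.array([tuning(x_true, p) for p in periods])
responses += rng.normal(0, sigma_noise, size=len(periods))  # representation noise

# Brute-force ML over candidate positions: each unit alone is ambiguous
# (periodic), but jointly the scales pin down a unique position in [0, 105).
candidates = np.linspace(0, 105, 100_000)
log_lik = np.zeros_like(candidates)
for r, p in zip(responses, periods):
    log_lik -= (r - tuning(candidates, p))**2 / (2 * sigma_noise**2)

print("true:", x_true, "ML estimate:", candidates[np.argmax(log_lik)])
```

The exhaustive search over candidates is exactly the step that a message-passing scheme such as BP would replace in a scalable decoder.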
247

High-dimensional statistics : model specification and elementary estimators

Yang, Eunho 16 January 2015 (has links)
Modern statistics typically deals with complex data, in particular where the ambient dimension of the problem p may be of the same order as, or even substantially larger than, the sample size n. It has now become well understood that even under this type of high-dimensional scaling, statistically consistent estimators can be achieved provided one imposes structural constraints on the statistical models. In spite of great success over the last few decades, we still face bottlenecks of two distinct kinds: (I) in multivariate modeling, the data modeling assumptions are typically limited to instances such as Gaussian or Ising models, so handling varied types of random variables remains restricted; and (II) in terms of computation, the learning or estimation process is inefficient when p is extremely large, since in the current paradigm for high-dimensional statistics, regularization terms induce non-differentiable optimization problems, which in general do not have closed-form solutions. This thesis addresses these two distinct but highly complementary problems: (I) statistical model specification beyond the standard Gaussian or Ising models for data of varied types, and (II) computationally efficient elementary estimators for high-dimensional statistical models.
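For a sense of what a closed-form "elementary" estimator can look like, here is one generic construction from this literature (a sketch, not necessarily the thesis's exact formulation): elementwise soft-thresholding of a classical pilot estimate yields a sparse estimate with no iterative optimization at all.

```latex
% Generic closed-form sparse estimator: soft-threshold a pilot estimate.
\hat{\theta} = S_{\lambda}\bigl(\hat{\theta}_{\text{init}}\bigr),
\qquad
\bigl[S_{\lambda}(u)\bigr]_j = \operatorname{sign}(u_j)\,\max\bigl(|u_j| - \lambda,\, 0\bigr).
```

Because the thresholding operator acts coordinatewise, the estimator costs a single pass over the p coordinates, sidestepping the non-differentiable optimization that standard L1-regularized formulations require.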
248

Exploring efficient design approaches for display of multidimensional data to facilitate interpretation of information

Pathiavadi, Chitra S 01 June 2009 (has links)
Prescriptions for the effective display of quantitative information involving more than two variables are not available. To explore the effectiveness of retinal variables used in conjunction in facilitating the interpretation of information and decision making, a study with 135 participants was conducted. The study used color/shape, color/value, and value/shape as retinal variable pairings in interactive displays that required participants to answer nine questions at three levels of complexity (identification of data points, analysis of local comparisons, and analysis of global trends). Time-on-task scores and performance scores were measured. In addition, a View Clamp eye tracker system was used, and 12 of the 135 participants completed the question-answering task while their eye movements were recorded. Repeated measures analysis followed by multiple comparisons of means showed that participants in the color/shape group performed significantly better and faster than the color/value and shape/value groups, but only for questions that involved studying global trends and decision making (level 3). The shape/value group was significantly faster than the color/shape group at level 1. Results for color and value used in conjunction indicated that the pairing could be suitable for displaying data that involve comparison; this needs to be explored further. Eye movements provided further evidence for feature integration theory (Treisman, 1982) and showed that feature search occurred immediately as participants entered the display. Of the participants who reported mental strategies, 78% indicated that they identified the features used in the display first.
249

Statistical Text Analysis for Social Science

O'Connor, Brendan T. 01 August 2014 (has links)
What can text corpora tell us about society? How can automatic text analysis algorithms efficiently and reliably analyze the social processes revealed in language production? This work develops statistical text analyses of dynamic social and news media datasets to extract indicators of underlying social phenomena, and to reveal how social factors guide linguistic production. This is illustrated through three case studies: first, examining whether sentiment expressed in social media can track opinion polls on economic and political topics (Chapter 3); second, analyzing how novel online slang terms can be very specific to geographic and demographic communities, and how these social factors affect their transmission over time (Chapters 4 and 5); and third, automatically extracting political events from news articles, to assist analyses of the interactions of international actors over time (Chapter 6). We demonstrate a variety of computational, linguistic, and statistical tools that are employed for these analyses, and also contribute MiTextExplorer, an interactive system for exploratory analysis of text data against document covariates, whose design was informed by the experience of researching these and other similar works (Chapter 2). These case studies illustrate recurring themes toward developing text analysis as a social science methodology: computational and statistical complexity, and domain knowledge and linguistic assumptions.
250

Word meaning in context as a paraphrase distribution : evidence, learning, and inference

Moon, Taesun, Ph. D. 25 October 2011 (has links)
In this dissertation, we introduce a graph-based model of instance-based usage meaning that is cast as a problem of probabilistic inference. The main aim of this model is to provide a flexible platform that can be used to explore multiple hypotheses about usage meaning computation. Our model takes up and extends the proposals of Erk and Pado [2007] and McCarthy and Navigli [2009] by representing usage meaning as a probability distribution over potential paraphrases. We use undirected graphical models to infer this probability distribution for every content word in a given sentence. Graphical models represent complex probability distributions through a graph: nodes stand for random variables, edges stand for direct probabilistic interactions between them, and the lack of an edge between two variables reflects an independence assumption. In our model, we represent each content word of the sentence through two adjacent nodes: the observed node represents the surface form of the word itself, and the hidden node represents its usage meaning. The distribution over values that we infer for the hidden node is a paraphrase distribution for the observed word. To encode the fact that lexical semantic information is exchanged between syntactic neighbors, the graph contains edges that mirror the dependency graph for the sentence. Further knowledge sources that influence the hidden nodes are represented through additional edges that, for example, connect to the document topic. The integration of adjacent knowledge sources is accomplished in the standard way, by multiplying factors and marginalizing over variables. Evaluating on a paraphrasing task, we find that our model outperforms the current state-of-the-art usage vector model [Thater et al., 2010] on all parts of speech except verbs, where the previous model wins by a small margin. But our main focus is not on the numbers; it is on the fact that our model is flexible enough to encode different hypotheses about usage meaning computation. In particular, we concentrate on five questions (with minor variants):

- Nonlocal syntactic context: Existing usage vector models use only a word's direct syntactic neighbors for disambiguation or for inferring some other meaning representation. Would it help to instead have contextual information "flow" along the entire dependency graph, with each word's inferred meaning relying on the paraphrase distributions of its neighbors?

- Influence of collocational information: In some cases, it is intuitively plausible to use the selectional preference of a neighboring word towards the target to determine its meaning in context. How does building selectional preferences into the model affect performance?

- Non-syntactic bag-of-words context: To what extent can non-syntactic information, in the form of bag-of-words context, help in inferring meaning?

- Effects of parametrization: We experiment with two transformations of the maximum likelihood estimate (MLE): one interpolates various MLEs, and the other transforms the MLE by exponentiating pointwise mutual information. Which performs better?

- Type of hidden nodes: Our model posits a tier of hidden nodes immediately adjacent to the surface tier of observed words to capture dynamic usage meaning. We examine two variants of the model: in one, the hidden nodes have actual words as values; in the other, they have nameless indexes as values. The former has the benefit of interpretability, while the latter allows more standard parameter estimation.

Portions of this dissertation are derived from joint work between the author and Katrin Erk [submitted].
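To sketch the kind of inference involved, here is a toy two-word example with invented potentials (not the dissertation's factors or data): it computes a paraphrase distribution for one hidden node of a one-edge dependency graph by brute-force marginalization.

```python
# Toy undirected graphical model: two observed words, two hidden paraphrase
# nodes, one dependency edge between the hidden nodes. All potentials invented.
observed = ("coach", "fired")                       # surface forms
candidates = {                                      # paraphrase candidates
    "coach": ["trainer", "bus", "instructor"],
    "fired": ["dismissed", "shot", "ignited"],
}

def phi_obs(word, para):
    """Unary potential: goodness of `para` as a paraphrase of `word` (invented)."""
    table = {("coach", "trainer"): 3.0, ("coach", "instructor"): 2.0,
             ("coach", "bus"): 1.0, ("fired", "dismissed"): 3.0,
             ("fired", "shot"): 1.5, ("fired", "ignited"): 1.0}
    return table[(word, para)]

def phi_edge(p1, p2):
    """Pairwise potential along the dependency edge (invented compatibility)."""
    compatible = {("trainer", "dismissed"), ("instructor", "dismissed")}
    return 4.0 if (p1, p2) in compatible else 1.0

# Brute-force marginal for the first word: sum the joint over the other
# hidden variable, then normalize into a paraphrase distribution.
w1, w2 = observed
marg = {p1: sum(phi_obs(w1, p1) * phi_obs(w2, p2) * phi_edge(p1, p2)
                for p2 in candidates[w2]) for p1 in candidates[w1]}
Z = sum(marg.values())
for para, score in sorted(marg.items(), key=lambda kv: -kv[1]):
    print(f"P({w1} -> {para}) = {score / Z:.3f}")
```

The edge potential is what lets "dismissed" in context pull the distribution for "coach" toward "trainer" rather than "bus"; on real sentences with many nodes, this enumeration is replaced by inference over the full dependency graph.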
