  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
261

A DEVELOPMENT OF A COMPUTER AIDED GRAPHIC USER INTERFACE POSTPROCESSOR FOR ROTOR BEARING SYSTEMS

Arise, Pavan Kumar 01 January 2004 (has links)
Rotor dynamic analysis, which requires extensive amounts of data and rigorous analytical processing, has been eased by the advent of powerful and affordable digital computers. By incorporating the processor and a graphical post-processor in a single setup, this program offers a consistent and efficient approach to rotor dynamic analysis. The graphic user interface presented here addresses the inherent complexities of rotor dynamic analysis by linking the required computational algorithms into a comprehensive program in which input data and results are exchanged, analyzed, and graphically plotted with minimal effort by the user. By simply selecting an input file and the appropriate options, the user can carry out a comprehensive rotor dynamic analysis (synchronous response, stability analysis, and critical speed analysis with an undamped map) of a particular design and view the results, with several options to save the plots for further verification. This approach helps the user modify a turbomachinery design quickly until an efficient design is reached, with minimal compromise in any aspect.
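As a hedged illustration of the kind of computation such a post-processor automates, the sketch below estimates the undamped critical speed of a single-disc Jeffcott rotor, a standard textbook idealization rather than the program described above; the function name and parameters are illustrative assumptions.

```python
import math

def jeffcott_critical_speed_rpm(stiffness_n_per_m, mass_kg):
    """Undamped critical speed of a single-disc Jeffcott rotor: the
    shaft's lateral natural frequency omega_n = sqrt(k/m) in rad/s,
    converted to revolutions per minute."""
    omega_n = math.sqrt(stiffness_n_per_m / mass_kg)  # rad/s
    return omega_n * 60.0 / (2.0 * math.pi)           # rev/min
```

A synchronous response analysis would sweep the spin speed through this value and plot the amplitude peak that appears near it.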
262

GRAPHICAL MODELING AND SIMULATION OF A HYBRID HETEROGENEOUS AND DYNAMIC SINGLE-CHIP MULTIPROCESSOR ARCHITECTURE

Zheng, Chunfang 01 January 2004 (has links)
A single-chip, hybrid, heterogeneous, and dynamic shared-memory multiprocessor architecture is being developed which may be used for real-time and non-real-time applications. This architecture can execute any application described by a dataflow (process flow) graph of any topology; it can also dynamically reconfigure its structure at the node and processor-architecture levels and reallocate its resources to maximize performance and to increase reliability and fault tolerance. Dynamic change in the architecture is triggered by changes in parameters such as application input data rates, process execution times, and process request rates. The architecture is a Hybrid Data/Command Driven Architecture (HDCA). It operates as a dataflow architecture, but at the process level rather than the instruction level. This thesis focuses on the development, testing, and evaluation of new graphical software (hdca), developed first to perform a static resource allocation for the architecture to meet the timing requirements of an application; hdca then simulates the architecture executing the application using the statically assigned resources and parameters. While simulating an application, the software graphically and dynamically displays parameters and mechanisms important to the architecture's operation and performance. The new graphical software is able to show the system- and node-level dynamic capability of the HDCA, can model a fixed or varying input data rate, and allows fault tolerance analysis of the architecture.
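The static resource-allocation step described above can be sketched as a simple rate-matching computation: each process needs enough processor copies for its service rate to keep up with the input data rate. This is an illustrative reconstruction, not the hdca tool itself, and the function and parameter names are assumptions.

```python
import math

def allocate_copies(process_exec_times, input_rate):
    """For each process in the dataflow graph, choose the number of
    processor copies so that its service rate (copies / exec_time)
    meets or exceeds the application's input data rate."""
    return {name: max(1, math.ceil(input_rate * t))
            for name, t in process_exec_times.items()}
```

With an input rate of one item per second, a process taking 0.5 s needs one copy while a process taking 2 s needs two, which is exactly the bottleneck reasoning a static allocator performs before simulation.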
263

Speeding Up Gibbs Sampling in Probabilistic Optical Flow

Piao, Dongzhen 01 December 2014 (has links)
In today’s machine learning research, probabilistic graphical models are used extensively to model complicated systems under uncertainty, to aid understanding of problems, and to support inference and prediction of unknown events. For inference, exact methods such as the junction tree algorithm exist, but they suffer from exponential growth in cluster size and thus cannot handle large, highly connected graphs. Approximate inference methods do not try to find exact probabilities, but rather give results that improve as the algorithm runs. Gibbs sampling, one such approximate inference method, has gained a great deal of traction and is used extensively in inference tasks because it is easy to understand and implement. However, as problem size grows, even a fast algorithm needs a speed boost to meet application requirements. The number of variables in an applied graphical model can range from tens of thousands to billions, depending on the problem domain, and the original sequential Gibbs sampler may not return satisfactory results in limited time. This thesis therefore investigates ways to speed up Gibbs sampling. We study better initialization, blocking variables to be sampled together, and simulated annealing; these methods modify the algorithm itself. We also investigate ways to parallelize the algorithm: an algorithm is parallelizable when some steps do not depend on others, and we identify such dependencies in Gibbs sampling. We discuss how the choice of hardware and software architecture affects the parallelization result. We use the optical flow problem as a running example to demonstrate the various speed-up methods investigated. An optical flow method estimates the movements of small image patches between two images in a temporal sequence. We show how to model it with a probabilistic graphical model and solve it using Gibbs sampling.
The results of sequential Gibbs sampling are demonstrated, with comparisons to the various speed-up methods and to other optical flow methods.
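A minimal sketch of the sequential Gibbs sampler discussed above, on a grid MRF with a Potts smoothness term of the kind used for patch-labeling problems such as optical flow; the energy form, parameter names, and grid layout are illustrative assumptions, not the thesis's exact model.

```python
import math
import random

def gibbs_sweep(labels, grid_w, grid_h, n_states, unary, beta, rng):
    """One sequential Gibbs sweep over a grid MRF with a Potts
    smoothness term: each site is resampled from its full conditional
    given the current labels of its 4-neighbours."""
    for y in range(grid_h):
        for x in range(grid_w):
            i = y * grid_w + x
            nbrs = []
            if x > 0:
                nbrs.append(labels[i - 1])
            if x < grid_w - 1:
                nbrs.append(labels[i + 1])
            if y > 0:
                nbrs.append(labels[i - grid_w])
            if y < grid_h - 1:
                nbrs.append(labels[i + grid_w])
            # unnormalized conditional: data cost plus a reward for
            # each agreeing neighbour (Potts smoothness)
            weights = [math.exp(-unary[i][s] +
                                beta * sum(1 for n in nbrs if n == s))
                       for s in range(n_states)]
            r = rng.random() * sum(weights)
            acc = 0.0
            for s, w in enumerate(weights):
                acc += w
                if r <= acc:
                    labels[i] = s
                    break
    return labels
```

The sequential sweep above is exactly the dependency structure the thesis exploits for parallelization: sites whose neighbourhoods do not overlap (e.g. a checkerboard partition) can be resampled concurrently.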
264

Software Modeling in Cyber-Physical Systems

Shrestha, Shilu January 2014 (has links)
A Cyber-Physical System (CPS) tightly integrates computation, networking, and physical processes. It is a heterogeneous, multi-domain system consisting of both hardware and software. The cyber subsystems in a CPS implement the control strategy that affects the physical process; the software systems in a CPS are therefore more complex. Visualization provides a method of understanding complex systems by accumulating, grouping, and displaying components in such a manner that they may be understood more efficiently just by viewing the model rather than reading the code. Graphical representation of complex systems provides an intuitive and comprehensive way to understand the system. OpenModelica is an open-source development environment based on the Modelica modeling and simulation language that consists of several interconnected subsystems. OMEdit, one of the subsystems integrated into OpenModelica, is a graphical user interface for graphical modeling; it includes tools that allow users to create their own shapes and icons for a model. This thesis presents a methodology that provides an easy way of understanding the structure and execution of programs written in an imperative language such as C, through a graphical Modelica model.
265

Spatiotemporal Gene Networks from ISH Images

Puniyani, Kriti 01 September 2013 (has links)
As large-scale techniques for studying and measuring gene expression have been developed, automatically inferring gene interaction networks from expression data has emerged as a popular technique to advance our understanding of cellular systems. Accurate prediction of gene interactions, especially in multicellular organisms such as Drosophila or humans, requires temporal and spatial analysis of gene expression, which is not easily obtainable from microarray data. New image-based techniques using in-situ hybridization (ISH) have recently been developed to allow large-scale spatiotemporal profiling of whole-body mRNA expression. However, analysis of such data for discovering new gene interactions remains an open challenge. This thesis studies the question of predicting gene interaction networks from ISH data in three parts. First, we present SPEX2, a computer vision pipeline to extract informative features from ISH data. Next, we present an algorithm, GINI, for learning spatial gene interaction networks from embryonic ISH images at a single time step. GINI combines multi-instance kernels with recent work in learning sparse undirected graphical models to predict interactions between genes. Finally, we propose NP-MuScL (nonparanormal multi-source learning) to estimate a gene interaction network that is consistent with multiple sources of data sharing the same underlying relationships between nodes. NP-MuScL casts network estimation as estimating the structure of a sparse undirected graphical model. We use the semiparametric Gaussian copula to model the distribution of the different data sources, with the different copulas sharing the same covariance matrix, and show how to estimate such a model in the high-dimensional scenario. We apply our algorithms to more than 100,000 Drosophila embryonic ISH images from the Berkeley Drosophila Genome Project. Each of the six time steps in Drosophila embryonic development is treated as a separate data source.
With spatial gene interactions predicted via GINI, and temporal predictions combined via NP-MuScL, we are finally able to predict spatiotemporal gene networks from these images.
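The Gaussian copula step behind NP-MuScL can be illustrated with the standard rank-based estimator: the latent correlation is recovered from Kendall's tau via sigma = sin(pi/2 * tau), which is invariant to monotone transformations of each data source's marginals. This minimal sketch shows only that transform, not the full multi-source sparse estimator, and the function names are my own.

```python
import math

def kendall_tau(x, y):
    """Kendall's tau rank correlation (O(n^2) pairwise form)."""
    n = len(x)
    s = 0
    for i in range(n):
        for j in range(i + 1, n):
            p = (x[i] - x[j]) * (y[i] - y[j])
            s += (p > 0) - (p < 0)  # concordant minus discordant
    return 2.0 * s / (n * (n - 1))

def copula_correlation(columns):
    """Rank-based estimate of the latent Gaussian copula correlation
    matrix: sigma_jk = sin(pi/2 * tau_jk)."""
    p = len(columns)
    return [[math.sin(math.pi / 2.0 * kendall_tau(columns[j], columns[k]))
             for k in range(p)] for j in range(p)]
```

A sparse graphical model estimator (e.g. graphical lasso) would then be run on this correlation matrix in place of the sample covariance.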
266

Continuous Graphical Models for Static and Dynamic Distributions: Application to Structural Biology

Razavian, Narges Sharif 01 September 2013 (has links)
Generative models of protein structure enable researchers to predict the behavior of proteins under different conditions. Continuous graphical models are powerful and efficient tools for modeling static and dynamic distributions, and can be used to learn generative models of molecular dynamics. In this thesis, we develop new and improved continuous graphical models for modeling protein structure. We first present von Mises graphical models and develop consistent, efficient algorithms for sparse structure learning, parameter estimation, and inference. We compare our model to sparse Gaussian graphical models (GGMs) and show that it outperforms them on synthetic and Engrailed-protein molecular dynamics datasets. Next, we develop Expectation Maximization algorithms for estimating mixtures of von Mises graphical models, and show that these models outperform von Mises, Gaussian, and mixture-of-Gaussian graphical models in prediction accuracy on imputation tests over non-redundant protein structure datasets. We then turn to nonparanormal and nonparametric graphical models, which have extensive representational power, and compare several state-of-the-art structure learning methods that can be used before nonparametric inference in reproducing kernel Hilbert space embedded graphical models. To take advantage of nonparametric models, we also propose feature-space embedded belief propagation, using random Fourier feature approximation to scale the inference algorithm to larger datasets. To improve scalability further, we integrate a coreset selection algorithm with the nonparametric inference and show that the combined model scales to large datasets with very little adverse effect on prediction quality.
Finally, we present time-varying sparse Gaussian graphical models to learn smoothly varying graphical models of molecular dynamics simulation data, and present results on the CypA protein.
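As a hedged sketch of the basic building block behind von Mises graphical models, here is the univariate von Mises density on the circle in pure Python (the full model couples many such angular variables, e.g. protein dihedral angles; this shows only the marginal form). The series implementation of the Bessel normalizer is illustrative.

```python
import math

def bessel_i0(x, terms=30):
    """Modified Bessel function of the first kind, order 0,
    via its power series: I0(x) = sum_k (x^2/4)^k / (k!)^2."""
    return sum((x / 2.0) ** (2 * k) / math.factorial(k) ** 2
               for k in range(terms))

def von_mises_pdf(theta, mu, kappa):
    """Density of the von Mises distribution: a circular analogue of
    the Gaussian, with mean direction mu and concentration kappa."""
    return math.exp(kappa * math.cos(theta - mu)) / (2.0 * math.pi * bessel_i0(kappa))
```

At kappa = 0 the distribution is uniform on the circle; larger kappa concentrates mass around mu, which is why the family suits angular molecular-dynamics data better than a Gaussian wrapped onto the line.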
267

Normal Factor Graphs

Al-Bashabsheh, Ali 25 February 2014 (has links)
This thesis introduces normal factor graphs under a new semantics, namely, the exterior function semantics. Initially, this work was motivated by two distinct lines of research. One line is ``holographic algorithms,'' a powerful approach introduced by Valiant for solving various counting problems in computer science; the other is ``normal graphs,'' an elegant framework proposed by Forney for representing codes defined on graphs. The nonrestrictive normality constraint enables the notion of holographic transformations for normal factor graphs. We establish a theorem, called the generalized Holant theorem, which relates a normal factor graph to its holographic transformation. We show that the generalized Holant theorem on one hand underlies the principle of holographic algorithms, and on the other reduces to a general duality theorem for normal factor graphs, a special case of which was first proved by Forney. As an application beyond Forney's duality, we show that the normal factor graph duality facilitates the approximation of the partition function for the two-dimensional nearest-neighbor Potts model. In the course of our development, we formalize a new semantics for normal factor graphs, which highlights various linear algebraic properties that enable the use of normal factor graphs as a linear algebraic tool. Indeed, we demonstrate the ability of normal factor graphs to encode several concepts from linear algebra and present normal factor graphs as a generalization of ``trace diagrams.'' We illustrate, with examples, the workings of this framework and how several identities from linear algebra may be obtained using a simple graphical manipulation procedure called ``vertex merging/splitting.'' We also discuss translation association schemes with the aid of normal factor graphs, which we believe provides a simple approach to understanding the subject.
Further, under the new semantics, normal factor graphs provide a probabilistic model that unifies several graphical models such as factor graphs, convolutional factor graphs, and cumulative distribution networks.
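In the simplest reading of the semantics above, a factor graph's sum-of-products value (its partition function, with variables living on edges) can be computed for small graphs by brute-force enumeration. The sketch below is illustrative only; the thesis's point is precisely that holographic transformations and duality let one avoid such enumeration. Names are my own.

```python
from itertools import product

def sum_of_products(n_vars, domain, factors):
    """Sum over all assignments to the edge variables of the product of
    local factor values. Each factor is (var_indices, fn), where fn is
    evaluated on that factor's incident variables."""
    total = 0.0
    for assign in product(domain, repeat=n_vars):
        w = 1.0
        for idx, fn in factors:
            w *= fn(*(assign[i] for i in idx))
        total += w
    return total
```

Two degree-one factors sharing a single edge variable behave like an inner product of the two local functions, which is the linear-algebraic reading the exterior function semantics makes systematic.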
268

Social Approaches to Disease Prediction

Mansouri, Mehrdad 25 November 2014 (has links)
Objective: This thesis focuses on the design and evaluation of a disease prediction system able to detect hidden and upcoming diseases of an individual. Unlike previous work, which has typically relied on precise medical examinations to extract symptoms and risk factors for computing the probability of occurrence of a disease, the proposed system evaluates the risk of a disease from similar patterns of disease comorbidity in the population and the individual. Methods: We combine three machine learning algorithms to construct the prediction system: an item-based recommendation system, a Bayesian graphical model, and a rule-based recommender. We also propose multiple similarity measures for the recommendation system, each useful in a particular condition. We finally show how the best parameter values for the system can be derived by optimizing a cost function and the ROC curve. Results: A permutation test is designed to evaluate the accuracy of the prediction system. Results showed a considerable advantage of the proposed system compared to a plain item-based recommendation system, and improved predictions when the system is trained separately for each gender and race. Conclusion: The proposed system has been shown to be a competent method for accurately identifying potential diseases in patients with multiple diseases, based only on their disease records. The procedure also contains novel soft computing and machine learning ideas that can be used in other prediction problems. The proposed system can accommodate more complex datasets that include disease timelines, disease networks, and social networks, making it an even more capable platform for disease prediction. Hence, this thesis contributes to the improvement of the disease prediction field. / Graduate / 0800 / 0766 / 0984 / mehrdadmansouri@yahoo.com
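A minimal sketch of the item-based component described above: diseases are compared by cosine similarity of their patient co-occurrence vectors, and a candidate disease is scored by its similarity to the diseases already in a patient's record. The data, names, and the specific similarity measure are illustrative assumptions, not the thesis's dataset or chosen measures.

```python
import math

def cosine(u, v):
    """Cosine similarity between two co-occurrence vectors."""
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den if den else 0.0

def disease_scores(records, diseases, patient):
    """Item-based scoring: build one indicator column per disease over
    all patient records, then score each disease the patient does not
    yet have by its summed similarity to the patient's diseases."""
    cols = {d: [1 if d in r else 0 for r in records] for d in diseases}
    return {d: sum(cosine(cols[d], cols[p]) for p in patient)
            for d in diseases if d not in patient}
```

In the full system this score would be combined with the Bayesian graphical model and rule-based recommender rather than used alone.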
269

A semiotic approach to the use of metaphor in human-computer interfaces

Condon, Chris January 1999 (has links)
Although metaphors are common in computing, particularly in human-computer interfaces, opinion is divided on their usefulness to users and little evidence is available to help the designer in choosing or implementing them. Effective use of metaphors depends on understanding their role in the computer interface, which in turn means building a model of the metaphor process. This thesis examines some of the approaches which might be taken in constructing such a model before choosing one and testing its applicability to interface design. Earlier research into interface metaphors used experimental psychology techniques which proved useful in showing the benefits or drawbacks of specific metaphors, but did not give a general model of the metaphor process. A cognitive approach based on mental models has proved more successful in offering an overall model of the process, although this thesis questions whether the researchers tested it adequately. Other approaches which have examined the metaphor process (though not in the context of human-computer interaction) have come from linguistic fields, most notably semiotics, which extends linguistics to non-verbal communication and thus could cover graphical user interfaces (GUIs). The main work described in this thesis was the construction of a semiotic model of human-computer interaction. The basic principle of this is that even the simplest element of the user interface will signify many simultaneous meanings to the user. Before building the model, a set of assertions and questions was developed to check the validity of the principles on which the model was based. Each of these was then tested by a technique appropriate to the type of issue raised. Rhetorical analysis was used to establish that metaphor is commonplace in command-line languages, in addition to its more obvious use in GUIs.
A simple semiotic analysis, or deconstruction, of the Macintosh user interface was then used to establish the validity of viewing user interfaces as semiotic systems. Finally, an experiment was carried out to test a mental model approach proposed by previous researchers. By extending their original experiment to more realistically complex interfaces and tasks and using a more typical user population, it was shown that users do not always develop mental models of the type proposed in the original research. The experiment also provided evidence to support the existence of multiple layers of signification. Based on the results of the preliminary studies, a simple means of testing the semiotic model's relevance to interface design was developed, using an interview technique. The proposed interview technique was then used to question two groups of users about a simple interface element. Two independent researchers then carried out a content analysis of the responses. The mean number of significations in each interview, as categorised by the researchers, was 15. The levels of signification were rapidly revealed, with the mean time for each interview being under two minutes, providing effective evidence that interfaces signify many meanings to users, a substantial number of which are easily retrievable. It is proposed that the interview technique could provide a practical and valuable tool for systems analysis and interface designers. Finally, areas for further research are proposed, in particular to ascertain how the model and the interview technique could be integrated with other design methods.
270

Generating programming environments with integrated text and graphics for VLSI design systems

McCaskill, George Alexander January 1987 (has links)
The constant improvements in device integration, the development of new technologies and the emergence of new design techniques call for flexible, maintainable and robust software tools. The generic nature of compiler-compiler systems, with their semi-formal specifications, can help in the construction of those tools. This thesis describes the Wright editor generator which is used in the synthesis of language-based graphical editors (LBGEs). An LBGE is a programming environment where the programs being manipulated denote pictures. Editing actions can be specified through both textual and graphical interfaces. Editors generated by the Wright system are specified using the formalism of attribute grammars. The major example editor in this thesis, Stick-Wright, is a design entry system for the construction of VLSI circuits. Stick-Wright is a hierarchical symbolic layout editor which exploits a combination of text and graphics in an interactive environment to provide the circuit designer with a tool for experimenting with circuit topologies. A simpler system, Pict-Wright: a picture drawing system, is also used to illustrate the attribute grammar specification process. This thesis aims to demonstrate the efficacy of formal specification in the generation of software tools. The generated system Stick-Wright shows that a text/graphic programming environment can form the basis of a powerful VLSI design tool, especially with regard to providing the designer with immediate graphical feedback. Further applications of the LBGE generator approach to system design are given for a range of VLSI design activities.
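The attribute-grammar formalism mentioned above can be illustrated by bottom-up evaluation of a synthesized attribute over a toy expression grammar; this sketch is my own illustration of the formalism, not part of the Wright system.

```python
def synth_value(tree):
    """Evaluate the synthesized 'value' attribute of a parse tree for a
    toy grammar E -> E + E | E * E | num. Each production contributes
    one attribute equation, applied bottom-up from the leaves."""
    kind = tree[0]
    if kind == "num":
        return tree[1]          # leaf: attribute comes from the token
    if kind == "add":
        return synth_value(tree[1]) + synth_value(tree[2])
    if kind == "mul":
        return synth_value(tree[1]) * synth_value(tree[2])
    raise ValueError("unknown production: %r" % (kind,))
```

An editor generator extends this idea: attributes can denote pictures or layout geometry rather than numbers, and incremental re-evaluation keeps the display consistent after each edit.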
