91

Efficient integration of software components for scientific simulations

Putanowicz, Roman January 2007 (has links)
The advances in multi-physics and multi-scale scientific simulations are the incentive for research into new ways of handling the development of complex simulation codes. The paradigms of component programming and Grid computing set a new level of requirements for simulation codes in terms of the interaction between highly heterogeneous components and the adoption of new codes. This thesis stresses the need for the development of software integration techniques for scientific simulation codes. Ensuring component interoperability not only allows the building of more powerful programs but, as is strongly stressed in this thesis, helps to lower the cost of the verification and validation of simulation programs. This thesis introduces the notion of a hybrid simulation system as a system consisting of generic system programming language libraries, a scripting language interpreter, interface modules between scripting language and system language components, and interface generation tools. It is argued that hybrid systems are the most appropriate environment for the development of academic simulation codes. The main contribution of this thesis is the idea of Grid and Geometry Exchange Services (GAGES) as an example of a hybrid system in the domain of pre- and post-processing of scientific simulations. The case studies undertaken to support claims about GAGES and hybrid systems yielded several practical results, for instance an anisotropic mesh generator, a surface mesh generator, a grid plotting library and new tools for multi-language programming. A posteriori analysis of the development efforts resulted in another original idea: the usage of the SWIG compiler interface specification as a universal scientific interface description language.
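A minimal sketch of the scripting-to-system-language bridge such a hybrid system is built around. SWIG generates this glue automatically from an interface specification; here the same pattern is hand-rolled in Python with ctypes against the standard C maths library (the library name assumes Linux, and libm merely stands in for a compiled simulation kernel):

```python
import ctypes

# Load a compiled system-language library into the scripting interpreter.
# libm stands in for a simulation kernel; on macOS the name would differ.
libm = ctypes.CDLL("libm.so.6")

# Declare the C signature so values are marshalled correctly across the
# language boundary -- the job a SWIG-generated interface module automates.
libm.cos.argtypes = [ctypes.c_double]
libm.cos.restype = ctypes.c_double

# The compiled routine is now callable from the scripting layer.
print(libm.cos(0.0))  # 1.0
```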
92

An intelligent automated diagnostic architecture for industrial applications

Zaki, Osama Farouk January 2007 (has links)
No description available.
93

Development of variable structure observers and their integration with the unscented Kalman filter

Ongkosutjahjo, Martin January 2009 (has links)
No description available.
94

Computational-linguistic approaches to biological text mining

Clegg, Andrew Brian January 2008 (has links)
No description available.
95

An expert system to assist in design

Taylor, Nicholas Kenelm January 1990 (has links)
No description available.
96

VICE : an interface designed for complex engineering software : an application of virtual reality

Taylor, Mark John January 2005 (has links)
Concurrent Engineering has been taking place within the manufacturing industry for many years, whereas the construction industry has until recently continued using the 'over the wall' approach, where each task is completed before the next begins. For real concurrent engineering in construction to take place there needs to be true collaborative working between client representatives, construction professionals, suppliers and subcontractors. The aim of this study was to design, develop and test a new style of user interface which promotes a more intuitive form of interaction than the standard desktop-metaphor-based interface. This new interface was designed as an alternative to the default interface of the INTEGRA system and must also promote enhanced user collaboration. By choosing alternative metaphors that are more obvious to the user, it is postulated that such an interface can be developed. Specific objectives were set that would allow the project aim to be fulfilled: to gain a better understanding of the requirements of successful concurrent engineering, particularly at the conceptual design phase; to complete a thorough review of current interfaces, including any guidelines on how to create a "good user interface"; to experience many of the collaboration systems available today so that an informed choice of application could be made; to learn the relevant skills required to design, produce and implement the interface of choice; and to perform a user evaluation of the finished user interface to improve overall usability and further streamline concurrent conceptual design. The user interface developed used a virtual reality environment to create a metaphor of an office building. Project members could then coexist and interact within the building, promoting collaboration, while at the same time having access to the remaining INTEGRA tools. The user evaluation showed that the Virtual Integrated Collaborative Environment (VICE) user interface was a successful addition to the INTEGRA system; the system was evaluated by a substantial number of different users, which supports this finding. The user evaluation also provided positive results from two different demographics, concluding that the system was easy and intuitive to use, with the necessary functionality. Using metaphor-based user interfaces is not a new concept; it has become standard practice for most software developers. There are arguments for and against these types of user interface: some advanced users will argue that such an interface limits their ability to make full use of an application, but the majority of users do not fall within this bracket and for them metaphor-based user interfaces are very useful. This is again evident from the user evaluation.
97

Data mining and integration of heterogeneous bioinformatics data sources

Al-Mutairy, Badr January 2008 (has links)
In this thesis, we present a novel approach to interoperability based on biological relationships, using relationship-based integration to integrate bioinformatics data sources; this refers to the use of different relationship types, with different relationship closeness values, to link gene expression datasets with other information available in public bioinformatics data sources. These relationships provide flexible linkage for biologists to discover linked data across the biological universe. Relationship closeness is a variable used to measure the closeness of the biological entities in a relationship and is a characteristic of the relationship. The novelty of this approach is that it allows a user to link a gene expression dataset with heterogeneous data sources dynamically and flexibly to facilitate comparative genomics investigations. Our research has demonstrated that using different relationships allows biologists to analyze experimental datasets in different ways, shortens the time needed to analyze the datasets and provides an easier way to undertake this analysis. Thus, it gives biologists more power to experiment with changing threshold values and linkage types. This is achieved in our framework by introducing the Soft Link Model (SLM) and a Relationship Knowledge Base (RKB), which is built and used by the SLM. The Integration and Data Mining of Bioinformatics Data sources system (IDMBD) is implemented as a proof-of-concept prototype to demonstrate the linkage technique described in the thesis.
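A toy sketch of the relationship-based linkage idea: a relationship knowledge base assigns a closeness value to each relationship type, and a soft link returns only those targets whose relationship is close enough. All names, closeness values and data here are illustrative assumptions, not the thesis's actual SLM/RKB implementation:

```python
# Relationship Knowledge Base: relationship type -> closeness value in (0, 1].
# The types and values below are invented for illustration.
RKB = {
    "encodes": 1.0,        # gene -> protein product
    "same_pathway": 0.7,   # gene -> fellow pathway member
    "homolog_of": 0.5,     # gene -> homologous gene in another species
}

# Toy links from genes in an expression dataset to external entities.
LINKS = [
    ("BRCA1", "encodes", "P38398"),
    ("BRCA1", "same_pathway", "TP53"),
    ("BRCA1", "homolog_of", "Brca1_mouse"),
]

def soft_link(gene, min_closeness):
    """Return linked entities whose relationship closeness meets the threshold."""
    return [(rel, target) for g, rel, target in LINKS
            if g == gene and RKB[rel] >= min_closeness]

print(soft_link("BRCA1", 0.6))  # raising the threshold drops the homology link
```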
98

Application of improved automated text mining to transcriptome datasets

Leong, Hui Sun January 2009 (has links)
A major challenge in microarray data analysis is the functional interpretation of gene lists. A common approach to address this is over-representation analysis (ORA), which uses the hypergeometric test (or its variants) to evaluate whether a particular functionally-defined group of genes is represented more than expected by chance within a gene list. Existing applications of ORA have been largely limited to controlled vocabularies such as Gene Ontology (GO) terms and KEGG pathways. Therefore, this work aims to determine whether ORA can be applied to a wider mining of free text. Initial explorations using the classical hypergeometric distribution to analyse tokens from PubMed abstracts revealed a hitherto unexpected feature: gene lists derived from typical microarray experiments tend to have more annotation (PubMed abstracts) associated with them than would be expected by chance. This bias, a result of patterns of research activity within the biomedical community, is a major problem for the classical hypergeometric test-based ORA approach, which cannot account for it. The negative effect of annotation bias is a marked over-representation of many common (and likely uninformative) terms, interspersed with terms that appear to convey real biological insight. Several solutions were developed to address this issue. The first is based on a permutation test, but this nonparametric approach is hampered by being computationally intensive. Two computationally tractable approaches were subsequently developed, based on the detection of outliers and on the extended hypergeometric distribution. The performance of the proposed text-based ORA approaches was demonstrated on a wide range of published datasets covering different species. A comparison with existing tools that use GO terms suggests that mining PubMed abstracts can reveal additional biological insight that may not be possible by mining pre-defined ontologies alone.
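A minimal sketch of the classical hypergeometric ORA test the abstract builds on, using SciPy; the counts are invented for illustration. This is exactly the form that cannot account for annotation bias, which motivates the corrections developed in the thesis:

```python
from scipy.stats import hypergeom

M = 20000  # background: all annotated genes (illustrative)
n = 150    # background genes carrying the term of interest
N = 300    # genes in the experimentally derived list
k = 12     # genes in the list carrying the term

# P(X >= k): chance of seeing at least k term-bearing genes in the list.
p_value = hypergeom.sf(k - 1, M, n, N)
print(f"over-representation p-value: {p_value:.3g}")
```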
99

Low-discrepancy point sampling of 2D manifolds for visual computing

Quinn, Jonathan Alexander January 2009 (has links)
Point distributions are used to sample surfaces for a wide variety of applications within the fields of graphics and computational geometry, such as point-based graphics, remeshing and area/volume measurement. The quality of such point distributions is important, and quality criteria are often application dependent. Common quality criteria include visual appearance, an even distribution whilst avoiding aliasing and other artifacts, and minimisation of the number of points required to accurately sample a surface. Previous work suggests that discrepancy measures the uniformity of a point distribution and hence a point distribution of minimal discrepancy is expected to be of high quality. We investigate discrepancy as a measure of sampling quality, and present a novel approach for generating low-discrepancy point distributions on parameterised surfaces. Our approach uses the idea of converting the 2D sampling problem into a 1D problem by adaptively mapping a space-filling curve onto the surface. A 1D sequence is then generated and used to sample the surface along the curve. The sampling process takes into account the parametric mapping, employing a corrective approach similar to histogram equalisation, to ensure that it gives a 2D low-discrepancy point distribution on the surface. The local sampling density can be controlled by a user-defined density function, e.g. to preserve local features, or to achieve desired data reduction rates. Experiments show that our approach efficiently generates low-discrepancy distributions on arbitrary parametric surfaces, with results nearly as good as those of popular low-discrepancy sampling methods designed for particular surfaces like planes and spheres. We develop a generalised notion of the standard discrepancy measure, which considers a broader set of sample shapes used to compute the discrepancy. In this more thorough testing, our sampling approach produces results superior to popular distributions. We also demonstrate that the point distributions produced by our approach closely adhere to the blue noise criterion, unlike the popular low-discrepancy methods tested, which show high levels of structure, undesirable for visual representation. Furthermore, we present novel sampling algorithms to generate low-discrepancy distributions on triangle meshes. To sample the mesh, it is cut into a disc topology, and a parameterisation is generated. Our sampling algorithm can then be used to sample the parameterised mesh, using robust methods for computing discrete differential properties of the surface. After these pre-processing steps, the sampling density can be adjusted in real time. Experiments also show that our sampling approach can accurately resample existing meshes with low discrepancy, demonstrating error rates when reducing mesh complexity as good as the best results in the literature. We present three applications of our mesh sampling algorithm. We first describe a point-based graphics sampling approach, which includes a global hole-filling algorithm. We investigate the coverage of sample discs for this approach, demonstrating results superior to random sampling and a popular low-discrepancy method. Moreover, we develop level-of-detail and view-dependent rendering approaches, providing very fine-grained density control with distance and angle, and silhouette enhancement. We further discuss a triangle-based remeshing technique, producing high-quality, topologically unaltered meshes.
Finally, we describe a complete framework for sampling and painting engineering prototype models. This approach provides density control according to surface texture, and gives full dithering control of the point sample distribution. Results exhibit high-quality point distributions for painting that are invariant to surface orientation or complexity. The main contributions of this thesis are novel algorithms to generate high-quality, density-controlled point distributions on parametric surfaces and triangular meshes. Qualitative assessment, discrepancy measures and blue noise criteria show their high sampling quality in general. We introduce generalised discrepancy measures which indicate that the sampling quality of our approach is superior to that of other low-discrepancy sampling techniques. Moreover, we present novel approaches towards remeshing, point-based rendering and robotic painting of prototypes by adapting our sampling algorithms, and demonstrate the overall good quality of the results for these specific applications.
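A minimal sketch of a 1D low-discrepancy building block of the kind the approach relies on: the base-2 van der Corput radical inverse (the abstract does not name a specific sequence, and the space-filling-curve mapping and density correction are not reproduced here):

```python
def van_der_corput(n, base=2):
    """n-th van der Corput point: reflect n's digits about the radix point."""
    q, denom = 0.0, 1.0
    while n:
        denom *= base
        n, remainder = divmod(n, base)
        q += remainder / denom
    return q

# Successive points fill [0, 1) far more evenly than uniform random samples.
print([van_der_corput(i) for i in range(1, 9)])
# [0.5, 0.25, 0.75, 0.125, 0.625, 0.375, 0.875, 0.0625]
```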
100

Theoretical modelling of the MISS structure in one and two dimensions

Lavelle, Stephen William January 1989 (has links)
In this thesis, a 1-D computer model of MISS operation is described. This is then used to characterise the qualitative behaviour of the MISS for changes in its structural parameters. The modelled device is assumed to have a four-layer MInp structure, commonly used by both theorists and experimentalists. Derived from this 1-D model is a quantitative description of switching in the three-layer MIS diode, using a heavily doped (10^17 cm^-3) n-type substrate. Results are then presented describing its behaviour for changes in fabrication parameters. The computer model of MISS functioning is extended into quasi-2-D by incorporating current spreading in the pn region of the device. Using this, the effect of changes in metal top-contact area on device behaviour is explained, with the model providing an accurate quantitative description of these effects for thick-oxide (30 Å) devices. The stability of the MISS as a circuit element is examined in its negative impedance region. A simple equivalent circuit model is produced, and calculated values for negative differential capacitance and negative differential resistance from the quasi-2-D MISS and MIS diode models are used to characterise device behaviour in this region. Within the work a number of accepted terms and ideas are challenged, with their uses being either redefined or discarded. This has been found to be necessary because of the scope of the work presented, which covers such a large range of device parameters.
