  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Dislocation-based continuum models of crystal plasticity on the micron scale

Nikitas, Nikolaos January 2009 (has links)
Miniaturization trends in electronic component manufacturing have challenged conventional knowledge of materials strength and deformation behavior. "The smaller, the stronger" has become a commonplace expression summarizing a multitude of experimental findings in micro-scale plasticity, and modelling tools capable of capturing this distinctive reality are in urgent demand. The thesis investigates the ubiquitous size effects in the plastic deformation of micron-scale specimens. Tracing the source of this behavior to the constituent elements of plastic deformation, we take the dynamics of discrete dislocations as a starting point and embed them into a continuum framework. The thesis is structured in two independent parts. The first part addresses the question of why size effects occur in constrained geometries. A systematic investigation of the connection between internal and external length scales is carried out in a system where dislocations, in the form of continuous lines embedded in a three-dimensional isotropic medium, move, expand, interact, and thus create plastic distortion in the deforming body. Our modelling strategy uses a set of deterministic evolution equations for dislocation densities to describe the stress-driven evolution of the material's internal state. These transport-like equations simultaneously serve as constitutive laws describing the deformation of the stressed body. Application to three benchmark problems gives good agreement with both experiment and discrete dislocation dynamics simulations. The second part of the thesis focuses on the heterogeneity and intermittency of deformation processes at the micro scale. Recent experimental results question the concept of smooth and homogeneous plastic flow with fluctuations that average out above a certain scale. Bursts of activity, which follow power-law size distributions and produce long-range correlated deformation patterns, seem to persist even on scales far greater than the atomic one. In short, plasticity in this view appears as a 'crackling noise' phenomenon, similar to other irregular and burst-like processes such as earthquakes or granular avalanches. Why, then, do we observe smooth stress-strain curves in macroscopic sample testing? Concepts originating from self-organized criticality and pinning theories are employed to produce an efficient continuum description, which is then used to study the effect of intrinsic and extrinsic deformation parameters on the fluctuation phenomena. It is deduced that hardening, the mode of load driving and specimen size are all decisive in constraining fluctuating behavior, and limits to the applicability of classical theory can be drawn.
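For orientation only, a generic single-slip transport law of the kind alluded to above can be written for a dislocation density advected by a stress-dependent velocity, with the plastic strain rate given by Orowan's relation; this is a schematic textbook form, not the specific density-based equations developed in the thesis:

```latex
% Schematic single-slip example (not the thesis's actual formulation):
% \rho is a dislocation density, v(\tau) its stress-driven velocity,
% b the Burgers vector magnitude, \dot{\gamma}^{pl} the plastic shear rate.
\begin{align}
  \partial_t \rho + \nabla \cdot \bigl(\rho\, \mathbf{v}(\tau)\bigr) &= 0, \\
  \dot{\gamma}^{\mathrm{pl}} &= \rho\, b\, v(\tau).
\end{align}
```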
2

Algorithms for sensor validation and multisensor fusion

Wellington, Sean January 2002 (has links)
Existing techniques for sensor validation and sensor fusion are often based on analytical sensor models. Such models can be arbitrarily complex, and consequently Gaussian distributions are often assumed, generally with a detrimental effect on overall system performance. A holistic approach has therefore been adopted in order to develop two novel and complementary approaches to sensor validation and fusion based on empirical data. The first uses the Nadaraya-Watson kernel estimator to provide competitive sensor fusion. The new algorithm is shown to reliably detect and compensate for bias errors, spike errors, hardover faults, drift faults and erratic operation affecting up to three of the five sensors in the array. The inherent smoothing action of the kernel estimator provides effective noise cancellation, and the fused result is more accurate than the single 'best' sensor. A Genetic Algorithm has been used to optimise the Nadaraya-Watson fuser design. The second approach uses analytical redundancy to provide an on-line sensor status output μH∈[0,1], where μH=1 indicates that the sensor output is valid and μH=0 that the sensor has failed. This fuzzy measure is derived from change detection parameters based on spectral analysis of the sensor output signal. The validation scheme can reliably detect a wide range of sensor fault conditions. An appropriate context-dependent fusion operator can then be used to perform competitive, cooperative or complementary sensor fusion, with a status output from the fuser providing a useful qualitative indication of the status of the sensors used to derive the fused result. The operation of both schemes is illustrated using data obtained from an array of thick-film metal oxide pH sensor electrodes. An ideal pH electrode senses only the activity of hydrogen ions; however, the selectivity of the metal oxide device is worse than that of the conventional glass electrode. The use of sensor fusion can therefore reduce measurement uncertainty by combining readings from multiple pH sensors having complementary responses. The array can be conveniently fabricated by screen printing sensors using different metal oxides onto a single substrate.
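As a purely illustrative sketch of the competitive-fusion idea, and not the optimised fuser developed in the thesis, a Nadaraya-Watson style kernel weighting can combine redundant readings so that sensors far from a robust reference value receive negligible weight; the Gaussian kernel, the median reference and the bandwidth h below are assumptions of this example.

```python
import numpy as np

def nw_fuse(readings, h=0.05):
    """Fuse redundant sensor readings with a Nadaraya-Watson style kernel weighting.

    Each reading is weighted by a Gaussian kernel of bandwidth h centred on the
    median of the array, so outlying (faulty) sensors receive little weight.
    Illustrative sketch only, not the optimised fuser described in the thesis.
    """
    readings = np.asarray(readings, dtype=float)
    reference = np.median(readings)              # robust reference value
    weights = np.exp(-0.5 * ((readings - reference) / h) ** 2)
    if weights.sum() == 0.0:                     # all readings far from reference
        return float(reference)
    return float(np.sum(weights * readings) / np.sum(weights))

# Example: five pH sensors, one with a hardover fault and one with a spike
print(nw_fuse([7.01, 6.98, 7.03, 14.0, 9.5], h=0.05))  # ~7.0
```

In the thesis the fuser design is optimised with a Genetic Algorithm rather than fixed by hand as in this sketch.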
3

Software engineering process modelling analysis

Wang, Yingxu January 1998 (has links)
Studies of the software engineering process are a natural extension of the scope of conventional software development methodologies, meeting the requirements of engineering large-scale software development. A new approach to dealing with the difficulties of large-scale software development in software engineering is to establish an appropriate software engineering process system. This work investigates software engineering process system modelling and analysis. The aims of this thesis are to investigate empirical software engineering process research and practice in order to solve the problems currently identified in software engineering process system modelling, to integrate current process models into a well-founded and unifying framework, and to lay theoretical foundations for the software engineering process as a formalised discipline.
4

Improved multidimensional digital filter algorithms using systolic structures

Hu, Zhijian January 1996 (has links)
This work begins by explaining the issues in systolic array design. It continues by defining the criteria used in evaluating the quality of a design and its performance. An important feature of the approach taken in seeking to improve systolic systems has been the choice of target functions. The rationale for these choices is explained, and an underlying set of unifying key criteria is outlined which has formed the basis of the design objectives in every case. In order to quantify improvements it is necessary to fully explore and document the current state of the art; this has been done by considering the best-performing systems in each area of interest. One of the unifying principles of the research has been the derivation of all original and new designs from transfer functions. Detailed methods for mapping DSP algorithms onto systolic arrays are explored in word-level and bit-level systems for multi-dimensional and median filters. The potential for improvement in the performance of systolic system implementations resides in two areas: improvement in the architectural structures of the arrays, and improvements in the speed and throughput of the processing elements. The programme of research has addressed both areas. In all, six new realisations of two-dimensional FIR and IIR filters are presented, along with two new structures for the median filter. Additionally, a hybrid opto-electronic processing element has been devised which applies Fabry-Pérot resonators in a novel way. The basic adder structure is fully developed to demonstrate a high-speed multiplier capability. An important issue for this research has been the verification of the correctness of designs and confirmation of the efficacy of the theoretically calculated performances. The approach taken has been a two-stage one in which a new circuit is first modelled at the behavioural level using the ELLA hardware description language. Having verified behavioural compliance, the next stage is to model the system as a low-level logic structure, which verifies the precise structures; the Mentor Graphics architectural design tools were used for this purpose. In a final implementation as VLSI there would be a need to take chip layout related issues into account, and these are discussed. The verification strategy of identifying and testing key structures is justified and evidence of successful simulation is provided. The results are discussed by comparing parameters of the new circuits with those of the previously best existing designs. The parameters tabulated are data throughput rate, circuit latency and circuit size (area). It is concluded that improvements are evident in the new designs and that they are highly regular structures with simple timing and control, making them attractive for VLSI implementation. In summary, the new and original structures provide a better balance between cost and complexity. The new processing element system is theoretically capable of operation in the region of 4 nanoseconds per addition, and the new algorithm for median filtering promises a sharp improvement in speed.
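For readers unfamiliar with the systolic idea, the following is a small cycle-by-cycle software model of a textbook broadcast-input FIR array: each processing element holds one coefficient and partial sums advance one element per clock. It illustrates the general structure only and is not one of the new realisations presented in the work.

```python
def systolic_fir(samples, coeffs):
    """Cycle-by-cycle model of a broadcast-input, move-results systolic FIR array.

    PE i holds one (reversed) coefficient; each cycle the input sample is
    broadcast to all PEs and partial sums move one PE to the right, so the
    last PE emits y[n] = sum_k coeffs[k] * x[n-k] (with x = 0 before the
    sequence starts).
    """
    w = list(reversed(coeffs))       # PE i holds coeffs[K-1-i]
    n_taps = len(w)
    y_regs = [0.0] * n_taps          # per-PE partial-sum registers
    outputs = []
    for x_in in samples:
        # All PEs update "simultaneously": next state from current state
        new_y = [w[0] * x_in] + [y_regs[i - 1] + w[i] * x_in
                                 for i in range(1, n_taps)]
        y_regs = new_y
        outputs.append(y_regs[-1])   # result leaving the last PE this cycle
    return outputs

# 3-tap moving-average example
print(systolic_fir([1, 2, 3, 4, 5], [1/3, 1/3, 1/3]))
```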
5

Netting the symbol : analytically deriving recursive connectionist representation of symbol structures

Callan, R. E. January 1996 (has links)
With the huge research effort into connectionist systems that has taken place over the last decade, a debate has developed as to whether the more traditional Artificial Intelligence (AI) paradigm of symbolism or the connectionist paradigm offers the way ahead for developing high-level cognitive systems. Central to the debate are issues of representation. Traditional AI has spent many years developing representation languages, and representation has long been seen as essential for the development of intelligent systems. Early connectionists tended to rely on the notion that a network of simple processing units will develop adequate internal representations as a by-product of learning. Indeed, with connectionism it would appear at first sight that the development of a representational formalism is somewhat intractable when knowledge is implicit in a distributed pattern of activity. Contrary to this view, some connectionists have agreed with the traditionalists that the mechanism of representation must support compositional construction and be understood. Some connectionists would even go so far as to say that the representation mechanism should be understood to the point where an explicit or easily read description of the knowledge held by a network can be given. This thesis presents some of the key issues which arise when attempting symbol-style representations with connectionist architectures. A number of connectionist techniques are reviewed. The emphasis of this thesis is on the presentation of a model that provides a simplified version of a connectionist system developed to represent symbol structures. The model is the result of the research reported herein and provides an original contribution in a number of important areas. The model has the benefit of allowing very quick derivation of connectionist representations, unlike the slow training environments of a pure network implementation. The model provides a mathematical framework that gives insight into the convergence behaviour of the technique it proposes, and this framework allows a statement to be made about generalisation characteristics. The model has immediate practical use in supplying connectionist representations with which to experiment, and provides a conceptual vehicle that should assist with the development of future techniques that tackle representation issues.
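The recursive composition step that such representations rely on can be sketched as follows; the weight matrix W here is random purely to show the compositional interface, whereas in a RAAM-style system it would be trained (or, as in this thesis, analytically derived) so that a matching decoder can recover the children.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8                                        # fixed representation width (assumption)
W = rng.normal(scale=0.5, size=(D, 2 * D))   # stand-in for a derived/trained encoder matrix

def encode(node, leaf_vectors):
    """Recursively map a binary tree of symbols to a single D-dimensional vector.

    `node` is either a symbol name (leaf) or a (left, right) tuple. Each internal
    node's representation is the compressed concatenation of its children's
    representations, so the whole structure fits in one fixed-width vector.
    """
    if isinstance(node, tuple):
        left, right = (encode(child, leaf_vectors) for child in node)
        return np.tanh(W @ np.concatenate([left, right]))
    return leaf_vectors[node]

leaves = {s: rng.normal(size=D) for s in ("john", "loves", "mary")}
tree = ("john", ("loves", "mary"))           # (john (loves mary))
print(encode(tree, leaves).shape)            # (8,)
```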
6

On the diagnosability of digital systems

Russell, Jeffrey Donald, January 1973 (has links)
Thesis (Ph.D.)--University of Wisconsin--Madison, 1973. Typescript. Includes vita and bibliographical references.
7

Modelling the dynamic behaviour of high-speed machining (HSM) motor spindles

Gagnol, Vincent 15 September 2006 (has links) (PDF)
High-speed machining (HSM) motor spindles strongly influence the service life of production equipment, as they are among the most complex and most heavily loaded subsystems of a machine tool. Insufficient mastery of the dynamic behaviour of these systems during machining leads to very high operating and maintenance costs and hinders the wider adoption of this technology. This thesis work developed a finite element model for predicting the dynamic behaviour of rotating motor spindles. Simulation of the model reveals a strong dependence of the dynamic behaviour on the spindle rotation frequency. The properties of the model are then exploited to predict the stability conditions of a machining operation. The numerical results presented in the thesis have been validated experimentally. An industrial spindle analysis software package, BrochePro, has been developed.
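For context only: machining stability predictions of this kind are commonly formulated around the regenerative chatter mechanism, in which the dynamic cutting force depends on the difference between the tool position on successive passes. A single-degree-of-freedom textbook form (not the rotating finite element spindle model developed in the thesis) is:

```latex
% Regenerative chatter, single degree of freedom (textbook form):
% T is the tooth-passing period, a_p the depth of cut, K_c a cutting-force coefficient.
m\,\ddot{x}(t) + c\,\dot{x}(t) + k\,x(t) = K_c\, a_p\,\bigl[x(t-T) - x(t)\bigr]
```

Stability lobes follow from the conditions under which this delay-differential equation admits bounded solutions; in the thesis such conditions are evaluated with spindle dynamics that themselves depend on the rotation speed.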
8

Run time reusability in object-oriented schematic capture

Parsons, David January 1999 (has links)
Object-orientation provides for reusability through encapsulation, inheritance and aggregation. Reusable elements such as classes and components can support extensible systems that enable the addition of new types of object. One context where extensibility can be applied is electronic circuit design, where systems can usefully enable the design and simulation of new electronic components. A useful tool in this domain is the graphical schematic capture interface, which allows a circuit designer to place and connect together the symbolic representations of electronic components. The resulting circuit schematic may then be 'captured' and converted to code form for simulation or synthesis. An extensible schematic capture system for VHDL-AMS mixed-mode (analogue and digital) code generation has been built using an object-oriented design based on a reflective architecture. It shields the circuit designer from complexity in the underlying hardware description language while enabling new component types to be created with the maximum reuse of existing objects. Polymorphism, a defining feature of object-oriented systems, is used to provide this flexibility and power, not only in design patterns and code mechanisms but also as a conceptual approach. The system has been specifically designed to allow for extensibility at run time, so that new types of component can be modelled, integrated into circuit schematics and included in the code generation phase as one seamless process. The definition of new components is largely automated via a visual programming interface, with only some behavioural definitions required using VHDL-AMS code. This functionality is supported by a reflective architecture that removes the need for code rebuilding, thus overcoming problems such as tool dependency, code bloat and the time required for recompilation. Component objects represented this way are semantically polymorphic in their external behaviours, both visually and in their code representation, without relying on a traditional single classification hierarchy. This use of polymorphism as a conceptual approach, rather than simply as an aspect of implementation, provides a schematic capture interface of great flexibility and transparency.
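As an illustrative sketch only (the thesis's system is built on a reflective architecture with a visual programming interface, which is not shown here), the polymorphic code-generation idea can be outlined as a base component type whose subclasses each emit their own hardware-description text; the class names and the emitted fragments below are hypothetical, not the tool's real output.

```python
from abc import ABC, abstractmethod

class Component(ABC):
    """Base type for schematic components; new component types are added by
    subclassing, and the capture step uses only this polymorphic contract."""

    def __init__(self, name: str):
        self.name = name

    @abstractmethod
    def to_vhdl_ams(self) -> str:
        """Emit a hardware-description fragment for this component."""

class Resistor(Component):
    def __init__(self, name: str, ohms: float):
        super().__init__(name)
        self.ohms = ohms

    def to_vhdl_ams(self) -> str:
        # Hypothetical fragment for illustration only
        return f"-- {self.name}: resistor, {self.ohms} ohm"

class Inverter(Component):
    def to_vhdl_ams(self) -> str:
        return f"-- {self.name}: digital inverter"

def capture(schematic):
    """'Capture' a schematic: ask every placed component for its code,
    without knowing or caring about its concrete type."""
    return "\n".join(c.to_vhdl_ams() for c in schematic)

print(capture([Resistor("R1", 4700.0), Inverter("U1")]))
```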
9

High performance computing systems for signal processing

King, Graham A. January 1996 (has links)
The submission begins by demonstrating that the conditions required for consideration under the University's research degrees regulations have been met in full. There then follows a commentary which starts by explaining the origin of the research theme concerned and continues by discussing the nature and significance of the work. This has been an extensive programme to devise new methods of improving the computational speed and efficiency required for effective implementation of FIR and IIR digital filters and transforms. The problems are analysed and initial experimental work is described which sought to quantify the performance to be derived from peripheral vector processors. For some classes of computation, especially in real time, it was necessary to turn to pure systolic array hardware engines, and a large number of innovations are suggested, both in array architecture and in the creation of a new hybrid opto-electronic adder capable of improving the performance of processing elements for the array. This significant and original research is extended further by including a class of computation involving a bit-sliced co-processor. A means of measuring the performance of this system is developed and discussed. The contribution of the work has been evident in: software innovation for horizontal-architecture microprocessors; improved multi-dimensional systolic array designs; the development of completely new implementations of processing elements in such arrays; and the details of co-processing architectures for bit-sliced microprocessors. The use of Read Only Memory in creating n-dimensional FIR or IIR filters, and in executing the discrete cosine transform, is a further innovative contribution that has enabled researchers to re-examine the case for pre-calculated systems previously using stored squares. The Read Only Memory work has suggested that Read Only Memory chips may be combined in a way architecturally similar to systolic array processing elements. This led to original concepts of pipelining for memory devices. The work is entirely coherent in that it covers the application of these contributions to a set of common processes, producing a set of performance-graded and scalable solutions. In order that effective solutions are proposed it was necessary to demonstrate a solid underlying appreciation of the computational mechanics involved. Whilst the published papers within this submission assume such an understanding, two appendices are provided to demonstrate the essential groundwork necessary to underpin the work resulting in these publications. The improved results obtained from the programme were threefold: execution time; theoretical clocking speeds and circuit areas; and speed-up ratios. In the case of the investigations involving vector signal processors the issue was one of quantifying the performance bounds of the architecture in performing specific combinations of signal processing functions. An important aspect of this work was the optimisation achieved in the programming of the device. The use of innovative techniques reduced the execution time for the complex combinational algorithms involved to under 10 milliseconds. Given the real-time constraints for typical applications and the aims of this research, the work evolved toward dedicated hardware solutions. Systolic arrays were thus a significant area of investigation.
In such systems the criteria of merit are concerned with achieving: a higher regularity in architectural structure; data exchanges only with nearest-neighbour processing elements; minimised global distribution functions such as power supplies and clock lines; minimised latency; minimisation in the use of latches; the elimination of output adders; and the design of higher-speed processing elements. The programme has made original and significant contributions to the art of effective array design, culminating in systems calculated to clock at 100 MHz when using 1 micron CMOS technology, whilst creating reductions in transistor count when compared with contemporary implementations. The improvements vary by specific design but are of the order of a 30-100% speed advantage and 20-30% less real-estate usage. The third type of result was obtained when considering operations best executed by dedicated microcode running on bit-sliced engines. The main issues for this part of the work were the development of effective interactions between host processors and the bit-sliced processors used for computationally intensive and repetitive functions, together with the evaluation of the relative performance of new bit-sliced microcode solutions. The speed-up obtained relative to a range of state-of-the-art microprocessors (68040, 80386, 32032) ranged from 2:1 to 8:1. The programme of research is represented by sixteen papers divided into three groups corresponding to the following stages in the work: problem definition and initial responses involving vector processors; the synthesis of higher-performance solutions using dedicated hardware; and bit-sliced solutions.
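To make the pre-calculation idea concrete in general terms, the sketch below models a FIR filter in which each tap's multiplication is replaced by a table lookup computed offline. It shows one simple form of pre-calculation (per-tap product tables) and is not a model of the stored-squares approach or of the specific ROM architectures and pipelining schemes investigated in the work.

```python
import numpy as np

def build_tap_roms(coeffs, sample_bits=8):
    """One ROM per tap: rom[tap][sample] = coeff * sample, computed offline."""
    samples = np.arange(2 ** sample_bits)
    return [np.round(c * samples).astype(np.int64) for c in coeffs]

def rom_fir(x, roms):
    """Filter an unsigned 8-bit sequence using table lookups and additions only."""
    x = np.asarray(x, dtype=np.int64)
    y = np.zeros(len(x), dtype=np.int64)
    for k, rom in enumerate(roms):          # y[n] += rom_k[x[n-k]]
        y[k:] += rom[x[:len(x) - k]]
    return y

roms = build_tap_roms([3, -2, 1])
print(rom_fir([10, 20, 30, 40], roms))      # [30, 40, 60, 80]
```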
10

The colorimetric segmentation of textured digital images

Noriega, Leonardo Antonio January 1998 (has links)
This study approaches the problem of colour image segmentation as a pattern recognition task. This leads to the problem being broken down into two component parts: feature extraction and classification algorithms. Measures to enable the objective assessment of segmentation algorithms are considered. In keeping with this pattern-recognition based philosophy, the issue of texture is approached through a consideration of features, followed by experimentation based on classification. Techniques based on Gabor filters and fractal dimension are compared. Colour is also considered in terms of its features, and a systematic exploration of colour features is undertaken. The technique for assessing colour features is also used as the basis for a segmentation algorithm that can combine colour and texture. In this study several novel techniques are presented and discussed: firstly, a methodology for the judgement of image segmentation algorithms; secondly, a technique for segmenting images using fractal dimension, including a novel application of information dimension; thirdly, an objective assessment of colour spaces using the techniques introduced in the first point of this study; and finally, strategies for combining colour and texture in the segmentation process, together with the techniques developed for them.
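A minimal sketch of the "texture as features plus classification" view follows: a small Gabor filter bank gives per-pixel texture features, which a simple k-means step then groups into regions. The filter parameters and the clustering stage are illustrative choices for this example, not the configurations evaluated in the thesis.

```python
import numpy as np
from scipy.ndimage import convolve, uniform_filter

def gabor_kernel(frequency, theta, sigma=3.0, size=15):
    """Real (cosine) Gabor kernel at a given spatial frequency and orientation."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)        # rotated coordinate
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * frequency * xr)

def texture_features(image, frequencies=(0.1, 0.25), n_orient=4, smooth=9):
    """Stack of local Gabor energy maps, one per (frequency, orientation) pair."""
    feats = []
    for f in frequencies:
        for theta in np.linspace(0, np.pi, n_orient, endpoint=False):
            response = convolve(image, gabor_kernel(f, theta), mode="reflect")
            feats.append(uniform_filter(response**2, size=smooth))  # local energy
    return np.stack(feats, axis=-1)

def kmeans_segment(features, k=2, iters=20, seed=0):
    """Label each pixel by its nearest cluster centre in feature space."""
    X = features.reshape(-1, features.shape[-1])
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centres[None]) ** 2).sum(-1), axis=1)
        centres = np.array([X[labels == j].mean(0) if np.any(labels == j) else centres[j]
                            for j in range(k)])
    return labels.reshape(features.shape[:2])

# Toy image: flat left half, high-frequency stripes on the right
img = np.zeros((64, 64))
img[:, 32:] = np.sin(np.arange(32) * 1.5)
print(np.bincount(kmeans_segment(texture_features(img)).ravel()))
```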
