About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
141

Temperature Aware and Defect-Probability Driven Test Scheduling for System-on-Chip

He, Zhiyuan January 2010
The high complexity of modern electronic systems has resulted in a substantial increase in time-to-market as well as in the cost of design, production, and testing. To reduce the design cost, many recent electronic systems have employed a core-based system-on-chip (SoC) implementation technique, which integrates pre-defined and pre-verified intellectual property cores into a single silicon die. Accordingly, the testing of manufactured SoCs adopts a modular approach in which test patterns are generated for individual cores and are applied to the corresponding cores separately. Among the many techniques that reduce the cost of modular SoC testing, test scheduling is widely adopted to reduce the test application time. This thesis addresses the problem of minimizing the test application time for modular SoC tests with consideration of three critical issues: high testing temperature, temperature-dependent failures, and defect probabilities.

High temperatures occur when testing modern SoCs and may damage the cores under test. We address the temperature-aware test scheduling problem, aiming to minimize the test application time while preventing the temperature of the cores under test from exceeding a given limit. We have developed a test set partitioning and interleaving technique and a set of test scheduling algorithms to solve this problem.

Complicated temperature dependences and defect-induced parametric failures are increasingly visible in SoCs manufactured in nanometer technologies. In order to detect temperature-dependent defects, a chip should be tested at different temperature levels. We address the SoC multi-temperature testing issue, where tests are applied to a core only when the temperature of that core is within a given temperature interval, and have developed test scheduling algorithms for multi-temperature testing of SoCs.

Volume production tests often employ an abort-on-first-fail (AOFF) approach, which terminates the chip test as soon as the first fault is detected. The defect probabilities of the individual cores can be used to compute the expected test application time of modular SoC tests using the AOFF approach. We address the defect-probability driven SoC test scheduling problem, aiming to minimize the expected test application time under a power constraint, and have proposed techniques which utilize the defect probabilities to generate efficient test schedules. Extensive experiments based on benchmark designs have been performed to demonstrate the efficiency and applicability of the developed techniques.
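As an illustration of the defect-probability driven idea, the sketch below computes the expected test application time of a purely sequential schedule under AOFF and picks the best ordering by brute force; the core names, times, and probabilities are invented, and the thesis's power constraints and test-set partitioning are not modeled.

```python
from itertools import permutations

def expected_test_time(order, times, defect_probs):
    """Expected test application time of a sequential schedule under
    abort-on-first-fail (AOFF): core i's test time counts only if all
    earlier cores passed, since a defect aborts the remaining schedule."""
    expected, reach = 0.0, 1.0   # reach = P(schedule gets to this core)
    for core in order:
        expected += reach * times[core]
        reach *= 1.0 - defect_probs[core]
    return expected

# Hypothetical per-core test times (clock cycles) and defect probabilities.
times = {"c1": 120.0, "c2": 40.0, "c3": 300.0}
probs = {"c1": 0.02, "c2": 0.10, "c3": 0.01}

best = min(permutations(times), key=lambda o: expected_test_time(o, times, probs))
print(best, expected_test_time(best, times, probs))
```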
142

Driver Modeling Based on Computational Intelligence Approaches : Exploration and Modeling of Driver-Following Data Collected by an Instrumented Vehicle

Ma, Xiaoliang January 2006
This thesis is concerned with the modeling of driver behavior based on data collected from real traffic using an advanced instrumented vehicle. In particular, the focus is on driver-following behavior (often called car-following in transport science) for microscopic simulation of road traffic systems. In addition, the modeling methodology developed can be applied to the design of human-centered control algorithms in adaptive cruise control (ACC) and other longitudinal active-safety technologies. Driver behavior is a constant research topic in the modeling of traffic systems and Intelligent Transportation Systems (ITS), and can be traced back to the work of General Motors (GM) in the 1950s. Early on, researchers were only interested in the development of driver models fulfilling basic physical properties and producing reasonable flow dynamics on a macroscopic level. With the booming interest in driver modeling on a microscopic level and the needs of ITS developments, researchers now emphasize modeling using microscopic data acquired from the real world. Following this research trend, a methodological framework for car-following data acquisition, analysis, and modeling has been developed step by step in this thesis; the basic idea is to build a computational model for car-following behavior by exploration of the collected data. To carry out the work, different techniques within the field of modern Artificial Intelligence (AI), namely Computational Intelligence (CI), have been applied in the research subtasks, e.g., information estimation, behavioral regime classification, regime model integration, and model estimation. Therefore, a preliminary introduction to the CI methods used in this thesis is included in the text.
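For context on the car-following tradition mentioned above, here is a minimal sketch of the classic GM stimulus-response model; the thesis's own models are data-driven CI models rather than this analytic form, and all parameter values below are illustrative.

```python
def gm_acceleration(v_follower, v_leader, gap, c=0.5, m=0.0, l=1.0):
    """Classic GM stimulus-response car-following model: the follower's
    acceleration is proportional to the relative speed, scaled by its
    own speed (exponent m) and the inverse spacing (exponent l)."""
    return c * (v_follower ** m) / (gap ** l) * (v_leader - v_follower)

# One simulated scenario: a slower follower reacting to a faster leader.
dt, gap, v_f, v_l = 0.1, 20.0, 22.0, 25.0   # s, m, m/s, m/s
for _ in range(100):
    v_f += gm_acceleration(v_f, v_l, gap) * dt
    gap += (v_l - v_f) * dt
print(f"after 10 s: gap {gap:.1f} m, follower speed {v_f:.1f} m/s")
```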
143

Hierarchical curvature estimation in computer vision

Bårman, Håkan January 1991
This thesis concerns the estimation and description of curvature for computer vision applications. Different types of multi-dimensional data are considered: images (2D); volumes (3D); time sequences of images (3D); and time sequences of volumes (4D). The methods are based on local Fourier domain models and use local operations such as filtering. A hierarchical approach is used: firstly, the local orientation is estimated and represented with a vector field equivalent description; secondly, the local curvature is estimated from the orientation description. The curvature algorithms are closely related to the orientation estimation algorithms, and the methods as a whole give a unified approach to the estimation and description of orientation and curvature. In addition, the methodology avoids thresholding and premature decision making. Results on both synthetic and real-world data are presented to illustrate the algorithms' performance with respect to accuracy and noise insensitivity. Examples illustrating the use of the curvature estimates for tasks such as image enhancement are also included.
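A rough 2-D rendering of this two-level hierarchy, estimating orientation first and curvature from the orientation description, is sketched below; it substitutes a gradient-based structure tensor for the thesis's Fourier-domain filter sets, and the function name, curvature proxy, and parameters are ours.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def orientation_then_curvature(image, sigma=2.0):
    """Two-level hierarchy: (1) local orientation as a double-angle
    vector field from smoothed gradient outer products, (2) a curvature
    proxy as the spatial rate of change of that orientation field.
    No thresholding is applied at either level."""
    gx, gy = sobel(image, axis=1), sobel(image, axis=0)
    jxx = gaussian_filter(gx * gx, sigma)
    jyy = gaussian_filter(gy * gy, sigma)
    jxy = gaussian_filter(gx * gy, sigma)
    z = (jxx - jyy) + 2j * jxy            # double angle: no 180-deg ambiguity
    zn = z / (np.abs(z) + 1e-9)           # keep direction, drop magnitude
    curvature = np.hypot(
        np.hypot(sobel(zn.real, axis=1), sobel(zn.real, axis=0)),
        np.hypot(sobel(zn.imag, axis=1), sobel(zn.imag, axis=0)))
    return 0.5 * np.angle(z), curvature

# Synthetic test: concentric rings, whose iso-curves have curvature
# decreasing with radius.
y, x = np.mgrid[-64:64, -64:64]
rings = np.cos(0.3 * np.hypot(x, y))
orientation, curvature = orientation_then_curvature(rings)
```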
144

Local symmetry features in image processing

Bigün, Josef January 1988
The extraction of features is necessary for all aspects of image processing and analysis, such as classification, segmentation, enhancement, and coding. In the course of developing models to describe images, a need arises for describing structures more complex than lines. This need does not diminish the importance of line structures, but indicates the need to complement and utilize them in a more systematic way. In this thesis, some new methods for the extraction of local symmetry features are presented, together with experimental results and applications. The local images are expanded in terms of orthogonal functions whose iso-value curves are harmonic functions. Circular, linear, hyperbolic, and parabolic structures are studied in particular, and some two-step algorithms involving only convolutions are given for detection purposes. Confidence measures, whose reliability is verified by both theoretical and experimental studies, are proposed. The method is extended to symmetric patterns fulfilling certain general conditions, and it is shown that in the general case the resulting algorithms can be implemented through the same computing schemes used for the detection of linear structures, except for the use of different filters. Multidimensional linear symmetry is studied, and an application problem in 3-D, in particular optical flow, is addressed; the solution proposed within this general framework results in a closed-form algorithm consisting of two steps, in which spatio-temporal gradient and Gaussian filtering are performed. The result consists of an optical flow estimate minimizing the linear symmetry criterion and a confidence measure based on the minimum error. The frequency-band sensitivity of the obtained results can be controlled. Experimental results are presented.
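The closed-form two-step optical flow solution described above can be illustrated with a spatio-temporal structure tensor: Gaussian-averaged gradient outer products, the smallest-eigenvalue eigenvector as the direction of least gray-value change, and the minimum error as a confidence. The sketch below is a simplified rendering under these assumptions, not the thesis's exact formulation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def linear_symmetry_flow(seq, sigma=2.0):
    """Optical flow from the 3-D spatio-temporal structure tensor: the
    eigenvector of the smallest eigenvalue is the direction of least
    gray-value change, i.e. the motion; the residual eigenvalue gives
    an error-based confidence measure."""
    gt, gy, gx = np.gradient(seq.astype(float))   # seq indexed (t, y, x)
    g = (gx, gy, gt)
    J = np.empty(seq.shape + (3, 3))
    for i in range(3):
        for j in range(3):
            J[..., i, j] = gaussian_filter(g[i] * g[j], sigma)
    w, v = np.linalg.eigh(J)                      # eigenvalues ascending
    e = v[..., :, 0]                              # least-change direction
    et = np.where(np.abs(e[..., 2]) < 1e-6, 1e-6, e[..., 2])
    confidence = 1.0 - w[..., 0] / (w[..., 2] + 1e-9)
    return e[..., 0] / et, e[..., 1] / et, confidence

# A pattern translating one pixel per frame in x; expect (vx, vy) ~ (1, 0).
t, y, x = np.mgrid[0:8, 0:32, 0:32]
seq = np.sin(0.4 * (x - t)) * np.cos(0.3 * y)
vx, vy, conf = linear_symmetry_flow(seq)
print(vx[4, 16, 16], vy[4, 16, 16], conf[4, 16, 16])
```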
145

Adaptive Real-time Monitoring for Large-scale Networked Systems

Gonzalez Prieto, Alberto January 2008
Large-scale networked systems, such as the Internet and server clusters, are omnipresent today. They increasingly deliver services that are critical to both businesses and society at large, and therefore their continuous and correct operation must be guaranteed. Achieving this requires the realization of adaptive management systems, which continuously reconfigure such large-scale dynamic systems in order to maintain their state near a desired operating point, despite changes in the networking conditions.

The focus of this thesis is continuous real-time monitoring, which is essential for the realization of adaptive management systems in large-scale dynamic environments. Real-time monitoring provides the necessary input to the decision-making process of network management, enabling management systems to perform self-configuration and self-healing tasks.

We have developed, implemented, and evaluated a design for real-time continuous monitoring of global metrics with performance objectives, such as monitoring overhead and estimation accuracy. Global metrics describe the state of the system as a whole, in contrast to local metrics, such as device counters or local protocol states, which capture the state of a local entity. Global metrics are computed from local metrics using aggregation functions, such as SUM, AVERAGE, and MAX.

Our approach is based on in-network aggregation, where global metrics are incrementally computed using spanning trees. Performance objectives are achieved by filtering the updates to local metrics that are sent along the tree. A key part of the design is a model for the distributed monitoring process that relates performance metrics to parameters that tune the behavior of the monitoring protocol. The model allows us to describe the behavior of individual nodes in the spanning tree in their steady state, and it has been instrumental in designing a monitoring protocol that is controllable and achieves given performance objectives.

We have evaluated our protocol, called A-GAP, experimentally, through simulation and a testbed implementation. It has proved to be effective in meeting performance objectives, efficient, adaptive to changes in the networking conditions, controllable along different performance dimensions, and scalable. We have implemented a prototype on a testbed of commercial routers, and the testbed measurements are consistent with the simulation studies we performed for different topologies and network sizes. This proves the feasibility of the design and, more generally, the feasibility of effective and efficient real-time monitoring in large network environments.
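A toy rendering of in-network aggregation with filtering is sketched below, with one tree level, the SUM aggregate, and a fixed filter width; A-GAP derives its filter widths from the stochastic node model rather than fixing them, so everything here is illustrative only.

```python
import random

class Node:
    """A node in an aggregation spanning tree. A local filter of width
    `f` suppresses updates whose aggregate change stays within the
    filter, trading estimation accuracy against protocol overhead."""
    def __init__(self, f):
        self.f = f
        self.local = 0.0
        self.reported = 0.0          # last partial aggregate sent upward
        self.children = []
        self.messages = 0

    def aggregate(self):             # SUM of own metric and child reports
        return self.local + sum(c.reported for c in self.children)

    def update(self, new_local):
        self.local = new_local
        agg = self.aggregate()
        if abs(agg - self.reported) > self.f:
            self.reported = agg      # only significant changes propagate
            self.messages += 1

root = Node(f=0.0)
root.children = [Node(f=5.0) for _ in range(4)]
for _ in range(100):                 # local metrics fluctuate over time
    for leaf in root.children:
        leaf.update(random.uniform(0.0, 100.0))
true_sum = sum(leaf.local for leaf in root.children)
print(f"estimate {root.aggregate():.1f} vs true {true_sum:.1f}, "
      f"{sum(leaf.messages for leaf in root.children)} messages")
```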
146

Material Hygiene : An EcoDesign mindset for recycling of products

Johansson, Jan January 2008
In recent years the end-of-life phase of products has come into focus. European Union directives have been issued regulating certain product groups and producer responsibility; vehicles and electronic products are the first to be identified and targeted. EU environmental legislation acts as a driver for increased reuse, recycling, and recovery. The overall aim of the presented activities has been to increase the effectiveness of current recycling practices, both in terms of design changes and end-of-life treatment process suggestions. A "pre-step" operation has been suggested, in order to either salvage valuable (or toxic) material or to remove diluting bulk material. As this thesis focuses on the recycling of white goods, specifically dishwashers, the suggested pre-step would be the removal of valuable copper prior to shredding. A life cycle assessment (LCA) study has been conducted to determine whether using a pre-step is beneficial from an environmental point of view. Furthermore, an experiment on the usability of recycled polymers from waste electrical and electronic equipment (WEEE) has been performed, and based on this work polymer recycling process suggestions are presented. Based on research in the fields of design for recycling, design for disassembly, and EcoDesign, the material hygiene (MH) concept of design for recycling is formulated. This concept is tested in a disassembly field study carried out at a waste collection facility and a polymer recycling experiment at a refrigerator fragmentation plant. Five MH factors are suggested: MH Mix, MH Identification, MH Resources, MH Weight, and MH Map. Additionally, an MH mind-set is presented.
147

Theoretical Investigations of Compressed Materials

Luo, Wei January 2010
The use of high pressure as a tool to design new materials as well as to investigate materials properties has become increasingly important during the last decade. The main goal of the present thesis is to enhance the significance of the high-pressure method as a quantitative tool in solid-state investigations. Virtually all of the properties of solids are directly determined by their electronic structure; similarly, the changes in the properties of solids under pressure are determined by the changes in the electronic structure under pressure. We have attempted to provide a comprehensive description of the resulting theory of the electronic structure and the properties of condensed matter.

The theoretical basis for these investigations is density functional theory, in combination with ab initio methods. The study of pressure-induced phase transitions for the compounds CaF2, Cr2GeC, Ti3SiC2, as well as V at 0 K is presented. The lattice parameters, the phase transition pressures, the equations of state, and the electronic structures have been calculated and show good agreement with experimental results.

A lattice dynamics study of body-centered cubic (bcc) Fe under high pressure and high temperature is presented; the bcc iron could be dynamically stabilized under Earth inner-core conditions. The unusual phase transition of bcc V under high pressure is investigated, and it is shown that the driving mechanism is the electron-phonon interaction. Finally, a method based on the LDA+U approach has been applied to study the spin-state transition in FeCO3. Our results show that magnetic entropy plays a significant role in the spin-state transition.
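As one concrete ingredient of such studies, the sketch below fits a third-order Birch-Murnaghan equation of state to energy-volume points, a standard way to extract the equilibrium volume and bulk modulus from ab initio total energies; the data points and starting values are made up, not taken from the thesis.

```python
import numpy as np
from scipy.optimize import curve_fit

def birch_murnaghan(V, E0, V0, B0, Bp):
    """Third-order Birch-Murnaghan energy-volume equation of state,
    with equilibrium energy E0, equilibrium volume V0, bulk modulus
    B0, and its pressure derivative Bp."""
    eta = (V0 / V) ** (2.0 / 3.0)
    return E0 + 9.0 * V0 * B0 / 16.0 * (
        (eta - 1.0) ** 3 * Bp + (eta - 1.0) ** 2 * (6.0 - 4.0 * eta))

# Hypothetical DFT energy-volume points (eV, Angstrom^3).
V = np.array([14.0, 15.0, 16.0, 17.0, 18.0])
E = np.array([-8.20, -8.45, -8.52, -8.47, -8.33])
(E0, V0, B0, Bp), _ = curve_fit(birch_murnaghan, V, E,
                                p0=(E.min(), 16.0, 1.0, 4.0))
print(f"V0 = {V0:.2f} A^3, B0 = {B0 * 160.2:.0f} GPa")  # eV/A^3 -> GPa
```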
148

Contributions to Modelling and Visualisation of Multibody Systems Simulations with Detailed Contact Analysis

Siemers, Alexander January 2010
The steadily increasing performance of modern computer systems is having a large influence on simulation technologies. It enables increasingly detailed simulations of larger and more comprehensive simulation models, which in turn produce increasingly large amounts of numerical data. This thesis presents several contributions in the field of mechanical system simulation and visualisation. The work described in the thesis is of practical relevance, and the results have been tested and implemented in tools that are used daily in industry, i.e., the BEAST (BEAring Simulation Tool) toolbox. BEAST is a multibody system (MBS) simulation software with a special focus on detailed contact calculations, and our work primarily focuses on these types of systems.

Research in the field of simulation modelling typically focuses on one or several specific topics around the modelling and simulation work process. The work presented here is novel in the sense that it provides a complete analysis and tool chain for the whole work process for simulation modelling and analysis of multibody systems with detailed contact models. The focus is on detecting and dealing with possible problems and bottlenecks in that work process. The following primary research questions have been formulated: How can object-oriented techniques be utilised for modelling of multibody systems, with special reference to contact modelling? How can visualisation be integrated with the modelling and simulation process of multibody systems with detailed contacts? How can existing simulation models be reused and combined to simulate large mechanical systems, consisting of several sub-systems, by means of co-simulation modelling?

Unique in this work is the focus on detailed contact models. Most modelling approaches for multibody systems focus on modelling of bodies and boundary conditions of such bodies, e.g., springs, dampers, and possibly simple contacts. Here an object-oriented modelling approach for multibody simulation and modelling is presented that, in comparison to common approaches, puts emphasis on integrated contact modelling and visualisation. The visualisation techniques are commonly used to verify the system model visually and to analyse simulation results.

Data visualisation covers a broad spectrum within research and development, where the focus is often on detailed solutions covering a fraction of the whole visualisation process. The novel visualisation aspect of the work presented here is that it covers the entire visualisation process, integrated with modelling and simulation. This includes a novel data structure for efficient storage and visualisation of multidimensional transient surface-related data from detailed contact calculations.

Different mechanical system simulation models typically focus on different parts (sub-systems) of a system. To fully understand a complete mechanical system, it is often necessary to investigate several or all parts simultaneously. One solution for a more complete system analysis is to couple different simulation models into one coherent simulation, and part of this work is concerned with such co-simulation modelling. Co-simulation modelling typically focuses on data handling, connection modelling, and numerical stability; this work puts the emphasis on ease of use, i.e., making mechanical system co-simulation modelling applicable for a larger group of people. A novel meta-model-based approach for mechanical system co-simulation modelling is presented. The meta-modelling process has been defined, and tools and techniques have been created to fully support the complete process. A component integrator and modelling environment are presented that support automated interface detection, interface alignment with automated three-dimensional coordinate translations, and three-dimensional visual co-simulation modelling. The integrated simulator is based on a general framework for mechanical system co-simulations that guarantees numerical stability.
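To make the co-simulation coupling concrete, here is a minimal loose-coupling (Jacobi) macro-step loop between two hypothetical one-dimensional subsystems; real frameworks of the kind described above add interface detection, coordinate translations, and stability guarantees that this sketch omits.

```python
class Mass:
    """A 1-D subsystem: a mass coupled to its partner through a spring
    acting at the interface (semi-implicit Euler integrator)."""
    def __init__(self, m, k, x0):
        self.m, self.k, self.x, self.v = m, k, x0, 0.0

    def advance(self, dt, x_partner):
        self.v += self.k * (x_partner - self.x) / self.m * dt
        self.x += self.v * dt

    def output(self):
        return self.x

def co_simulate(sub_a, sub_b, macro_dt, steps):
    """Loose (Jacobi) coupling: both subsystems advance one macro step
    using interface values from the previous exchange, then swap."""
    u_a, u_b = sub_a.output(), sub_b.output()
    for _ in range(steps):
        sub_a.advance(macro_dt, u_b)
        sub_b.advance(macro_dt, u_a)
        u_a, u_b = sub_a.output(), sub_b.output()

a, b = Mass(1.0, 50.0, 0.0), Mass(2.0, 50.0, 1.0)
co_simulate(a, b, macro_dt=0.01, steps=500)
print(f"x_a {a.x:.3f}, x_b {b.x:.3f}")
```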
149

Probabilistic Fault Diagnosis with Automotive Applications

Pernestål, Anna January 2009
The aim of this thesis is to contribute to improved diagnosis of automotive vehicles. The work is driven by case studies, where problems and challenges are identified. To solve these problems, theoretically sound and general methods are developed, and the methods are then applied to the real-world systems.

To fulfill performance requirements, automotive vehicles are becoming increasingly complex products, which makes them more difficult to diagnose. At the same time, the requirements on the diagnosis itself are steadily increasing: environmental legislation requires that smaller deviations from specified operation be detected earlier, more accurate diagnostic methods can be used to reduce maintenance costs and increase uptime, and improved diagnosis can also reduce safety risks related to vehicle operation.

Fault diagnosis is the task of identifying possible faults given current observations from the system. To do this, the internal relations between observations and faults must be identified. In complex systems, such as automotive vehicles, finding these relations is a most challenging problem due to several sources of uncertainty. Observations from the system are often hidden in considerable levels of noise. The systems are complicated to model, both because they are complex and because they are operated in continuously changing surroundings. Furthermore, since faults typically are rare, and sometimes never previously described, it is often difficult to obtain enough data to learn the relations from.

Due to the several sources of uncertainty in fault diagnosis of automotive systems, a probabilistic approach is used, both to find the internal relations and to identify the faults possibly present in the system given the current observations. To do this successfully, all available information is integrated in the computations. Both on-board and off-board diagnosis are considered. The two tasks may seem different in nature: on-board diagnosis is performed without human interaction, while off-board diagnosis is mainly based on interaction with a mechanic. On the other hand, both tasks regard the same vehicle, and information from the on-board diagnosis system may be useful also for off-board diagnosis. The probabilistic methods are general, and it is natural to consider both tasks.

The thesis contributes in three main areas. First, in Papers 1 and 2, methods are developed for combining training data and expert knowledge of different kinds to compute probabilities for faults. These methods are primarily developed with on-board diagnosis in mind, but are also applicable to off-board diagnosis. The methods are general, and can be used not only in the diagnosis of technical systems, but also in many other applications, including medical diagnosis and econometrics, where both data and expert knowledge are present.

The second area concerns inference in off-board diagnosis and troubleshooting, and the contribution consists of the methods developed in Papers 3 and 4. The methods handle probability computations in systems subject to external interventions, and in particular systems that include both instantaneous and non-instantaneous dependencies. They are based on the theory of Bayesian networks, and include event-driven non-stationary dynamic Bayesian networks (nsDBNs) and an efficient inference algorithm for troubleshooting based on static Bayesian networks. The framework of event-driven nsDBNs is applicable to all kinds of problems concerning inference under external interventions.

The third contribution area is Bayesian learning from data in the diagnosis application. The contribution is the comparison and evaluation of five Bayesian methods for learning in fault diagnosis in Paper 5. The special challenges in diagnosis related to learning from data are considered, and it is shown how the five methods should be tailored to be applicable to fault diagnosis problems. To summarize, the five papers in the thesis show how several challenges in automotive diagnosis can be handled using probabilistic methods, which have great potential in this domain. The probabilistic methods provide a framework for utilizing all available information, even when it comes in different forms. The computed probabilities can be combined with decision-theoretic methods to determine the appropriate action after the discovery of reduced system functionality due to faults.
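A minimal sketch of the probabilistic inference step is given below: posterior fault probabilities computed from observations under a naive conditional-independence assumption. The thesis's methods use full Bayesian networks and handle external interventions, which this toy does not; all fault names and numbers are invented.

```python
def fault_posteriors(priors, likelihoods, observations):
    """Posterior fault probabilities via Bayes' rule, assuming the
    observations are conditionally independent given the fault
    (a naive-Bayes simplification of a full Bayesian network)."""
    post = {}
    for fault, prior in priors.items():
        p = prior
        for obs, value in observations.items():
            p_true = likelihoods[fault][obs]        # P(obs = True | fault)
            p *= p_true if value else (1.0 - p_true)
        post[fault] = p
    z = sum(post.values())                          # normalize
    return {fault: p / z for fault, p in post.items()}

# Hypothetical single-fault hypotheses and observation likelihoods.
priors = {"no_fault": 0.90, "sensor_bias": 0.07, "leak": 0.03}
likelihoods = {
    "no_fault":    {"high_residual": 0.05, "pressure_drop": 0.02},
    "sensor_bias": {"high_residual": 0.80, "pressure_drop": 0.10},
    "leak":        {"high_residual": 0.60, "pressure_drop": 0.90},
}
print(fault_posteriors(priors, likelihoods,
                       {"high_residual": True, "pressure_drop": True}))
```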
150

Stimuli Generation Techniques for On-Chip Mixed-Signal Test

Ahmad, Shakeel January 2010
With the increased complexity of contemporary very large-scale integrated circuits, the need has emerged for on-chip test addressing not only the digital but also the analog and mixed-signal RF blocks. The standard production test has become more costly, and the instrumentation is pushed to its limits by leading-edge integrated circuit technologies. The chip performance at high-frequency operation and the area overhead also appear to be a hindrance in terms of the test access points needed for an instrumentation-based test. To overcome these problems, a test implemented on chip can be used, sharing the available resources such as digital signal processing (DSP) and A/D and D/A converters to constitute a built-in self-test. In this case, the DSP can serve both as a stimuli generator and a response analyzer, and arbitrary test signals can be achieved.

Specifically, the ΣΔ modulation technique implemented in software is useful to encode a single- or two-tone stimulus as a one-bit sequence, generating a spectrally pure signal with a high dynamic range. The sequence can be stored in a cyclic memory on chip and applied to the circuit under test using a buffer and a simple reconstruction filter. In this way, an ADC dynamic test for harmonic and intermodulation distortion is carried out in a simple setup. FFT artifacts are avoided by careful frequency planning for the low-pass and band-pass ΣΔ encoding techniques. A noise shaping based on a combination of low-pass and band-pass ΣΔ modulation is also useful, providing a high dynamic range for measurements at high frequencies, which is a new approach. However, a possible asymmetry between rise and fall times due to CMOS process variations in the driving buffer results in nonlinear distortion and increased noise at low frequencies. A simple iterative predistortion technique is used to reduce the low-frequency distortion components by making use of an on-chip DC-calibrated ADC, which is another contribution of the author.

Some tests, however, like the two-tone RF test that targets the linearity performance of a radio receiver, require test stimuli based on dedicated hardware. For the measurement of the third- or second-order intercept point (IP3/IP2), a spectrally clean stimulus is essential. Specifically, the second- or third-order harmonic or intermodulation products of the stimulus generator should be avoided, as they can obscure the test measurement. A challenge in this design is the phase noise performance and spurious tones of the oscillators, as well as the distortion-free addition of the two tones. The mutual pulling effect can be minimized by layout isolation techniques. A new two-tone RF generator based on a specialized phase-locked loop (PLL) architecture is presented as a viable solution for on-chip IP3/IP2 test. The PLL provides control over the frequency spacing of the two voltage-controlled oscillators. For the two-tone stimulus, a highly linear analog adder is designed to limit distortion which could obscure the IP3 test, and a specialized feedback circuit in the PLL is proposed to overcome interference from the reference spurs. The circuit is designed in a 65 nm CMOS process. By using a fine spectral resolution, the observed noise floor can be reduced to enable the measurement of second- or third-order intermodulation product tones; this also reflects a tradeoff between the test time and the test performance. While the test time to collect the required number of samples can be on the order of milliseconds, the number of samples need not be excessive, since the measurements are carried out at the receiver baseband, where the required sampling frequency is relatively low.
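A minimal sketch of the software ΣΔ encoding step is given below: a first-order low-pass modulator turning an oversampled tone into a one-bit sequence. The thesis uses higher-order and band-pass noise shaping for higher dynamic range; the sampling parameters here are ours.

```python
import numpy as np

def sigma_delta_encode(signal):
    """First-order low-pass sigma-delta modulator: the integrator
    accumulates the error between the input and the fed-back one-bit
    output, pushing quantization noise out of the signal band."""
    integ, y = 0.0, 0.0
    bits = np.empty(len(signal))
    for i, x in enumerate(signal):
        integ += x - y
        y = 1.0 if integ >= 0.0 else -1.0
        bits[i] = y
    return bits

# Encode an oversampled 1 kHz tone (amplitude kept below full scale).
fs, f0, n = 1.0e6, 1.0e3, 1 << 14
t = np.arange(n) / fs
bits = sigma_delta_encode(0.5 * np.sin(2.0 * np.pi * f0 * t))
spectrum = np.abs(np.fft.rfft(bits * np.hanning(n)))
print(spectrum.argmax() * fs / n)    # dominant bin falls at ~1 kHz
```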
