81

Automatic design of analogue circuits

Sapargaliyev, Yerbol January 2011 (has links)
Evolvable Hardware (EHW) is a promising area in electronics today. Evolutionary Algorithms (EA), together with a circuit simulation tool or real hardware, automatically design a circuit for a given problem. The circuits evolved may have unconventional designs and be less dependent on the personal knowledge of a designer. Nowadays, EA are chiefly represented by Genetic Algorithms (GA), Genetic Programming (GP) and Evolutionary Strategies (ES). While GA is by far the most popular tool, GP has developed rapidly in recent years and is notable for its outstanding results. To date, however, the use of ES for analogue circuit synthesis has been limited to a few applications. This work is devoted to exploring the potential of ES to create novel analogue designs. The thesis begins with a framework for an ES-based system generating simple circuits, such as low-pass filters. It then progresses step by step to increasingly sophisticated designs that demand more from the system. Finally, it describes the modernization of the system using novel techniques that enable the synthesis of complex, newly evolved multi-pin circuits. ES proves to be a powerful means of synthesizing analogue circuits. The circuits evolved in the first part of the thesis improve on comparable earlier results obtained with other techniques in component economy, in the performance of the evolved circuits, and in the computing power needed to reach the results. The target circuits for evolution in the second half were chosen by the author to challenge the capability of the developed system. Functionally, they belong not to the conventional analogue domain but to applications usually served by digital circuits. To solve these design tasks, the system was gradually extended to support the evolution of increasingly complex circuits.
As a final result, a state-of-the-art ES-based system has been developed that features a novel mutation paradigm; the ability to create, store and reuse substructures; adaptation of the mutation and selection parameters and the population size; automatic incremental evolution; and parallel computing. Able to synthesize multi-pin analogue circuits more complex than any previously evolved automatically, the system can produce circuits that are problematic for conventional design and whose applications lie beyond the conventional domain of analogue circuits.
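As a rough illustration of the evolution-strategy loop such systems build on, the sketch below evolves the component values of a fixed first-order RC low-pass topology toward a target cutoff frequency with a (1+λ) ES. The topology, starting point, and step-size rule are illustrative assumptions, not the thesis's actual system (which also evolves circuit topology):

```python
import math
import random

def lowpass_cutoff_hz(r_ohms, c_farads):
    # First-order RC low-pass: f_c = 1 / (2*pi*R*C)
    return 1.0 / (2.0 * math.pi * r_ohms * c_farads)

def evolve_rc(target_hz, generations=200, lam=8, seed=0):
    """(1+lambda) ES over log10-scaled R and C values."""
    rng = random.Random(seed)
    parent = [3.0, -6.0]          # start at R = 1 kOhm, C = 1 uF
    sigma = 0.5                   # mutation step size (log10 units)

    def fitness(genome):
        f = lowpass_cutoff_hz(10 ** genome[0], 10 ** genome[1])
        return abs(math.log10(f) - math.log10(target_hz))

    best = fitness(parent)
    for _ in range(generations):
        children = [[g + rng.gauss(0.0, sigma) for g in parent]
                    for _ in range(lam)]
        scored = sorted(((fitness(c), c) for c in children),
                        key=lambda fc: fc[0])
        if scored[0][0] < best:   # plus-selection: parent survives unless beaten
            best, parent = scored[0]
            sigma *= 1.1          # crude 1/5th-rule-style step adaptation
        else:
            sigma *= 0.9
    return parent, best

genome, err = evolve_rc(1000.0)   # aim for a 1 kHz cutoff
```

In a real EHW setting the fitness call would invoke a circuit simulator (or measure real hardware) rather than a closed-form formula.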
82

Microprocessor based step motor controller

Magotra, Neeraj January 2011 (has links)
Typescript (Photocopy) / Digitized by Kansas Correctional Industries
83

A Heuristic for the Constrained One-Sided Two-Layered Crossing Reduction Problem for Dynamic Graph Layout

Mai, Dung Hoang 01 January 2011 (has links)
Data in real-world graph drawing applications often change frequently but incrementally. Any drastic change in the graph layout could disrupt a user's "mental map." Furthermore, real-world applications like enterprise process or e-commerce graphing, where data change rapidly in both content and quantity, demand comprehensive responsiveness when rendering the graph layout in a multi-user environment in real time. Most standard static graph drawing algorithms apply global changes and redraw the entire graph layout whenever the data change. The new layout may be very different from the previous layout, and the time taken to redraw the entire graph degrades quickly as the amount of graph data grows. Dynamic behavior and the quantity of data generated by real-world applications pose challenges for existing graph drawing algorithms in terms of incremental stability and scalability. A constrained hierarchical graph drawing framework and a modified Sugiyama heuristic were developed in this research. The goal of this research was to improve the scalability of the constrained graph drawing framework while preserving layout stability. The framework's use of the relational data model shifts the graph application from the traditional desktop to a collaborative and distributed environment by reusing vertex and edge information stored in a relational database. This research was based on the work of North and Woodhull (2001) and the constrained crossing reduction problem proposed by Forster (2004). The resulting constrained hierarchical graph drawing framework and the new Sugiyama heuristic, especially the modified barycenter algorithms, were tested and evaluated against the Graphviz framework and North and Woodhull's (2001) online graph drawing framework. 
The performance test results showed that the constrained graph drawing framework's run time is comparable to that of the Graphviz framework in terms of generating static graph layouts, which is independent of database accesses. Decoupling graph visualization from the graph editing modules improved scalability, enabling the rendering of large graphs in real time. The visualization test also showed that the constrained framework satisfied the aesthetic criteria for constrained graph layouts. Future enhancements for this proposed framework include implementation of (1) the horizontal coordinate assignment algorithm, (2) drawing polylines for multilayer edges in the rendering module, and (3) displaying subgraphs for very large graph layouts.
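The barycenter step at the core of such crossing reduction can be sketched in a few lines. This is the textbook one-sided barycenter heuristic on a two-layer graph, not the constrained variant developed in the thesis; the sample graph is invented for illustration:

```python
def barycenter_order(fixed_pos, edges, free_nodes):
    """One-sided barycenter step: with the upper layer fixed, order the
    free layer by the mean position of each node's fixed-layer neighbours."""
    def bary(v):
        nbrs = [fixed_pos[u] for (u, w) in edges if w == v]
        return sum(nbrs) / len(nbrs) if nbrs else float('inf')
    return sorted(free_nodes, key=bary)

def count_crossings(fixed_pos, edges, free_order):
    """Count pairwise edge crossings for a given free-layer order."""
    pos = {v: i for i, v in enumerate(free_order)}
    crossings = 0
    for i, (u1, v1) in enumerate(edges):
        for (u2, v2) in edges[i + 1:]:
            # Two edges cross iff their endpoints are oppositely ordered.
            if (fixed_pos[u1] - fixed_pos[u2]) * (pos[v1] - pos[v2]) < 0:
                crossings += 1
    return crossings

fixed = {'a': 0, 'b': 1, 'c': 2}
edges = [('a', 'v2'), ('b', 'v1'), ('c', 'v1'), ('c', 'v2')]
order = barycenter_order(fixed, edges, ['v1', 'v2'])
# Reordering drops the crossing count from 2 to 1 on this example.
```

Forster's constrained version additionally merges nodes bound by ordering constraints before sorting, which is the part the thesis adapts.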
84

Dynamic protein classification: Adaptive models based on incremental learning strategies

Mohamed, Shakir 18 March 2008 (has links)
One of the major problems in computational biology is the inability of existing classification models to incorporate expanding and new domain knowledge. This problem of static classification models is addressed in this thesis by introducing incremental learning for problems in bioinformatics. The tools developed are applied to the problem of classifying proteins into a number of primary and putative families. This type of classification is of particular relevance owing to its role in drug discovery programs and the cost and time savings it brings to that process. As a secondary problem, multi-class classification is also addressed. The standard approach to protein family classification is based on creating committees of binary classifiers. This one-vs-all approach is not ideal, and the classification systems presented here consist of classifiers able to perform all-vs-all classification. Two incremental learning techniques are presented. The first is a novel algorithm based on the fuzzy ARTMAP classifier and an evolutionary strategy. The second applies the incremental learning algorithm Learn++. The two systems are tested using three datasets: data from the Structural Classification of Proteins (SCOP) database, the G-Protein Coupled Receptors (GPCR) database, and enzymes from the Protein Data Bank. The results show that the two techniques are comparable with each other, giving classification abilities comparable to those of single batch-trained classifiers, with the added ability of incremental learning. 
Both techniques are shown to be useful for protein family classification, but they are also applicable to problems outside this area, with applications in proteomics, including the prediction of functions and of secondary and tertiary structures, and in genomics, such as promoter and splice-site prediction and the classification of gene microarrays.
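A minimal sketch of the Learn++-style idea, one weak learner per arriving batch combined by error-weighted voting, might look as follows. The toy nearest-centroid learner and the exact weighting rule are illustrative assumptions, not the thesis's fuzzy-ARTMAP or Learn++ implementation:

```python
import math
from collections import defaultdict

class CentroidLearner:
    """Toy weak learner: classify by the nearest class centroid."""
    def fit(self, X, y):
        sums, counts = {}, defaultdict(int)
        for x, label in zip(X, y):
            if label not in sums:
                sums[label] = list(x)
            else:
                sums[label] = [a + b for a, b in zip(sums[label], x)]
            counts[label] += 1
        self.centroids = {c: [v / counts[c] for v in s] for c, s in sums.items()}
        return self

    def predict(self, x):
        dist = lambda c: sum((a - b) ** 2 for a, b in zip(x, self.centroids[c]))
        return min(self.centroids, key=dist)

class IncrementalEnsemble:
    """Learn++-style ensemble: one weak learner per arriving data batch,
    weighted by the log-odds of its training accuracy on that batch."""
    def __init__(self):
        self.members = []  # list of (learner, vote weight)

    def partial_fit(self, X, y):
        h = CentroidLearner().fit(X, y)
        err = sum(h.predict(x) != t for x, t in zip(X, y)) / len(X)
        err = min(max(err, 1e-6), 0.5 - 1e-6)   # keep the vote weight finite
        self.members.append((h, math.log((1.0 - err) / err)))

    def predict(self, x):
        votes = defaultdict(float)
        for h, w in self.members:
            votes[h.predict(x)] += w
        return max(votes, key=votes.get)
```

The key property mirrored here is that new data never forces retraining of earlier ensemble members, only the addition of a new one.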
85

Long-term distribution network pricing and planning to facilitate efficient power distribution

Heng, Hui Yi January 2010 (has links)
No description available.
86

Incremental Verification of Timing Constraints for Real-Time Systems

Andrei, Ştefan, Chin, Wei Ngan, Rinard, Martin C. 01 1900 (has links)
Timing constraints for real-time systems are usually verified through the satisfiability of propositional formulae. In this paper, we propose an alternative in which timing constraints are verified by counting the number of truth assignments instead of checking boolean satisfiability. This number can also tell us how "far away" a given specification is from satisfying its safety assertion. Furthermore, specifications and safety assertions are often modified incrementally, with problematic bugs fixed one at a time. To support this style of development, we propose an incremental algorithm for counting satisfiability. The proposed incremental algorithm is optimal in that no unnecessary nodes are created during each counting. The approach works for the class of path RTL. To illustrate the application, we show how incremental satisfiability counting can be applied to the well-known railroad crossing example, particularly while its specification is still being refined. / Singapore-MIT Alliance (SMA)
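The counting idea can be illustrated with a brute-force model counter: the gap between the models of the specification and the models of specification-plus-safety is the "distance" to safety. The paper's incremental, optimal algorithm is far more refined; the CNF encoding below is an assumption for illustration:

```python
from itertools import product

def count_models(variables, clauses):
    """Count the truth assignments satisfying a CNF formula.  Each clause
    is a set of signed literals: {1, -2} reads as (x1 OR NOT x2)."""
    count = 0
    for bits in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, bits))
        if all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses):
            count += 1
    return count

def violation_count(variables, spec, safety):
    """Assignments satisfying the specification but violating the safety
    assertion: 0 means the spec entails safety, and larger values give a
    crude measure of how 'far away' the spec is from being safe."""
    return count_models(variables, spec) - count_models(variables, spec + safety)
```

An incremental version would update the count when a single clause is added or removed rather than re-enumerating, which is the optimization the paper develops.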
87

Framtagande av ny höjdmätningsmetod till försvarets antennhiss 861 / The development of a new method of measuring vertical displacement of the elevator mounted search radar PS861

Åhs, Karl-Johan January 2012 (has links)
This bachelor degree project was carried out at Saab AB Service and Repair, Arboga, Sweden. The objective was to design, construct and implement a new, stable and reliable method of measuring the continuous vertical displacement (height) of the military search radar PS861 mounted on a hydraulically powered elevator. The end product must be durable enough to remain fully operational in the harsh environment of an outdoor elevator shaft, yet as accurate and precise as possible, since one of its purposes is to calibrate control equipment. The previously used technique proved to meet none of these requirements. A prototype using a high-resolution rotary encoder with quadrature output has been developed, allowing a completely digital interface. The new method has been evaluated in a laboratory environment, where tests were conducted on both reliability and validity. The tests show that the new digital system provides greatly improved accuracy and precision; in addition, the sensor's IP64 classification ensures operation even in the worst conditions. The technology developed in this project is also versatile and may be used in other situations where rotational motion is to be measured. Real-life tests have not yet been carried out; future test results will determine whether the product replaces the old system.
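A quadrature encoder outputs two phase-shifted logic signals whose transition order encodes direction. A software decode step might be sketched as follows; the counts-per-revolution and travel-per-revolution figures are purely hypothetical stand-ins for the real drum geometry:

```python
# Quadrature decode: each channel state is the 2-bit code (A << 1) | B;
# valid transitions move one step around the Gray cycle 0 -> 1 -> 3 -> 2.
_STEP = {
    (0, 1): +1, (1, 3): +1, (3, 2): +1, (2, 0): +1,   # one direction
    (0, 2): -1, (2, 3): -1, (3, 1): -1, (1, 0): -1,   # the other
}

def decode_quadrature(samples):
    """Turn a sequence of sampled (A, B) logic levels into a signed count."""
    prev = (samples[0][0] << 1) | samples[0][1]
    count = 0
    for a, b in samples[1:]:
        curr = (a << 1) | b
        if curr != prev:
            count += _STEP.get((prev, curr), 0)  # 0: illegal double-step, ignored
            prev = curr
    return count

def height_mm(steps, counts_per_rev=4096, mm_per_rev=200.0):
    """Convert encoder steps to vertical travel; both default constants
    are hypothetical, not the PS861 installation's actual figures."""
    return steps * mm_per_rev / counts_per_rev
```

In hardware this decoding is often done by a counter peripheral, but the state-table logic is the same.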
88

Incremental Aspect Model Learning on Streaming Documents

Wu, Cheng-Wei 16 August 2010 (has links)
Owing to the growth of the Internet, the flood of online data drives users to adopt tools that assist them in obtaining desired and useful information. Information retrieval (IR) techniques are among the major tools that ease users' information-processing load. However, most current IR models do not consider streaming information, which essentially characterizes today's Web environment. Re-building a model over the full data at hand every time new information arrives is impractical, inefficient, and costly. Instead, under a dynamic environment, IR models should adapt to streaming information incrementally. This research therefore proposes an IR technique, the incremental aspect model (ISM), which not only uncovers latent aspects in the collected documents but also adapts the aspect model chronologically as documents stream in. ISM has two stages: in Stage I, we employ the probabilistic latent semantic indexing (PLSI) technique to build a primary aspect model; in Stage II, with out-of-date data removed and new data folded in, the aspect model is expanded using a derived spectral method whenever significant new aspects appear. Three experiments were conducted to verify ISM. Results from the first two show the robust performance of ISM on incremental text clustering tasks. In Experiment III, ISM performs storyline tracking on the 2010 Soccer World Cup event, illustrating its incremental learning ability to discover the different themes around the event at any time. The feasibility of the proposed approach in real applications is thus justified.
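The folding-in step of PLSI-style models can be sketched as EM over a new document's aspect mixture with the word-given-aspect distributions held fixed. The two-aspect toy model below is an illustrative assumption, not ISM itself (which also removes stale data and spawns new aspects spectrally):

```python
def fold_in(doc_word_counts, p_w_given_z, iters=50):
    """Fold a new document into a fixed aspect model: run EM on the
    document's aspect mixture P(z|d) only, keeping P(w|z) frozen."""
    aspects = list(p_w_given_z)
    p_z = {z: 1.0 / len(aspects) for z in aspects}  # uniform start
    for _ in range(iters):
        resp = {z: 1e-12 for z in aspects}          # expected counts per aspect
        for w, n in doc_word_counts.items():
            denom = sum(p_z[z] * p_w_given_z[z].get(w, 1e-12) for z in aspects)
            for z in aspects:
                # E-step: responsibility of aspect z for the n occurrences of w.
                resp[z] += n * p_z[z] * p_w_given_z[z].get(w, 1e-12) / denom
        total = sum(resp.values())
        p_z = {z: r / total for z, r in resp.items()}  # M-step: renormalize
    return p_z

# Two hand-built aspects (purely illustrative word distributions).
model = {
    'sport':   {'goal': 0.5, 'match': 0.4, 'bank': 0.1},
    'finance': {'bank': 0.5, 'stock': 0.4, 'goal': 0.1},
}
mix = fold_in({'goal': 3, 'match': 2}, model)  # a sport-flavoured document
```

Because only the low-dimensional mixture is re-estimated, folding-in is far cheaper than retraining the whole aspect model, which is what makes it attractive for streams.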
89

A Sliding-Window Approach to Mining Maximal Large Itemsets for Large Databases

Chang, Yuan-feng 28 July 2004 (has links)
Mining association rules is the nontrivial extraction of implicit, previously unknown, and potentially useful information from data in databases. Mining maximal large itemsets extends mining association rules: it aims to find the set of maximal large (frequent) itemsets, which is representative of all large itemsets. Previous algorithms for mining maximal large itemsets can be classified into two approaches: exhaustive and shortcut. The shortcut approach generates fewer candidate itemsets than the exhaustive approach, resulting in better performance in terms of time and storage space. On the other hand, when updates to the transaction database occur, one option is to re-run the mining algorithm on the whole database. The other is incremental mining, which aims for efficient maintenance of the discovered association rules without re-running the mining algorithms. However, previous algorithms for mining maximal large itemsets based on the shortcut approach cannot support incremental mining, while algorithms for incremental mining, e.g., the SWF algorithm, cannot efficiently support mining maximal large itemsets, since they are based on the exhaustive approach. In this thesis, we therefore focus on the design of an algorithm that performs well for both mining maximal itemsets and incremental mining. Based on observations such as "if an itemset is large, all its subsets must be large; therefore, those subsets need not be examined further", we propose a sliding-window approach, the SWMax algorithm, for efficient mining of maximal large itemsets and incremental mining. SWMax is a two-pass partition-based approach. We find all candidate 1-itemsets ($C_1$), candidate 3-itemsets ($C_3$), large 1-itemsets ($L_1$), and large 3-itemsets ($L_3$) in the first pass. 
We generate the virtual maximal large itemsets after the first pass. Then we use $L_1$ to generate $C_2$, use $L_3$ to generate $C_4$, use $C_4$ to generate $C_5$, and so on until no further $C_k$ is generated. In the second pass, we use the virtual maximal large itemsets to prune $C_k$ and decide the maximal large itemsets. For incremental mining, we consider two cases: (1) data insertion and (2) data deletion. In both cases, if a 1-itemset is not large in the original database, it cannot be found in the updated database by the SWF algorithm; that is, a missing case can occur in SWF's incremental mining process, because SWF keeps only the $C_2$ information. Our SWMax algorithm supports incremental mining correctly, since $C_1$ and $C_3$ are maintained. In our simulation, we generated synthetic databases to model real transaction databases. The results show that SWMax generates fewer candidates and needs less time than the SWF algorithm.
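For orientation, the relationship between large and maximal large itemsets can be sketched with a plain Apriori-style enumeration. This is a naive illustration of the definitions and of the downward-closure observation quoted above, not the SWMax or SWF algorithm:

```python
from itertools import combinations

def frequent_itemsets(transactions, min_support):
    """Apriori-style enumeration of every frequent (large) itemset."""
    items = sorted({i for t in transactions for i in t})
    frequent, candidates = [], [frozenset([i]) for i in items]
    while candidates:
        level = [s for s in candidates
                 if sum(s <= t for t in transactions) >= min_support]
        frequent.extend(level)
        # Join frequent k-itemsets sharing k-1 items to form (k+1)-candidates.
        candidates = list({a | b for a, b in combinations(level, 2)
                           if len(a | b) == len(a) + 1})
    return frequent

def maximal_itemsets(transactions, min_support):
    """Keep only the frequent itemsets with no frequent proper superset;
    by downward closure these represent the whole frequent collection."""
    freq = frequent_itemsets(transactions, min_support)
    return [s for s in freq if not any(s < t for t in freq)]
```

The point of shortcut algorithms like SWMax is to reach the maximal sets without materializing every frequent itemset first, as this naive version does.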
90

A Novelty-based Clustering Method for On-line Documents

Khy, Sophoin, Ishikawa, Yoshiharu, Kitagawa, Hiroyuki January 2007 (has links)
No description available.
