  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
161

Optimization and Search in Model-Based Automotive SW/HW Development

Lianjie, Shen January 2014 (has links)
This thesis presents two case studies on design problems that arise during the design phase of a new Volvo truck: the frame packing problem on the CAN bus and the LDC allocation problem. In both cases the goal is to meet as many end-to-end latency requirements as possible. Today these problems are solved manually, relying on designer experience, and the results are not yet satisfactory. Drawing on developments in artificial intelligence, we propose two methods based on genetic algorithms to solve these design problems. In the first case study, on frame packing, a single genetic algorithm process searches for an optimal solution. In the second case study, on LDC allocation, we show how two genetic algorithm processes can be combined to reach an optimal solution. Together, the case studies demonstrate the feasibility of adopting artificial intelligence techniques in activities of the truck design phase.
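As a rough illustration of the genetic-algorithm approach described in this abstract, here is a minimal sketch (not the thesis's implementation; the signal set, latency model and fitness function are invented placeholders) of evolving a signal-to-frame assignment so that as many latency requirements as possible are met:

    import random

    # Hypothetical toy instance: assign signals to CAN frames so that
    # as many end-to-end latency requirements as possible are met.
    SIGNALS = [{"id": i, "deadline_ms": random.choice([10, 20, 50])} for i in range(20)]
    NUM_FRAMES = 6

    def estimated_latency_ms(frame_load):
        # Placeholder latency model: assume latency grows with frame load.
        return 5 + 3 * frame_load

    def fitness(assignment):
        # Count how many signals meet their deadline under the toy model.
        loads = [assignment.count(f) for f in range(NUM_FRAMES)]
        return sum(
            1
            for sig, frame in zip(SIGNALS, assignment)
            if estimated_latency_ms(loads[frame]) <= sig["deadline_ms"]
        )

    def crossover(a, b):
        cut = random.randrange(1, len(a))
        return a[:cut] + b[cut:]

    def mutate(a, rate=0.05):
        return [random.randrange(NUM_FRAMES) if random.random() < rate else g for g in a]

    def genetic_search(pop_size=40, generations=200):
        pop = [[random.randrange(NUM_FRAMES) for _ in SIGNALS] for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            parents = pop[: pop_size // 2]
            children = [
                mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(pop_size - len(parents))
            ]
            pop = parents + children
        return max(pop, key=fitness)

    best = genetic_search()
    print("requirements met:", fitness(best), "of", len(SIGNALS))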
162

Achieving predictable timing and fairness through cooperative polling

Sinha, Anirban 05 1900 (has links)
Time-sensitive applications that are also CPU-intensive, such as video games, video playback and eye-candy desktops, are increasingly common. These applications run on commodity operating systems that are targeted at diverse hardware, and hence they cannot assume that sufficient CPU is always available. Increasingly, these applications are designed to be adaptive. When executing multiple such applications, the operating system must not only provide good timeliness but also (optionally) allow coordination of their adaptations so that applications can deliver uniform fidelity. In this work, we present a starvation-free, fair process scheduling algorithm that provides predictable and low-latency execution without the use of reservations and assists adaptive time-sensitive tasks in achieving consistent quality through cooperation. We combine an event-driven application model called cooperative polling with a fair-share scheduler. Cooperative polling allows sharing of timing or priority information across applications via the kernel, thus providing good timeliness, while the fair-share scheduler provides fairness and full utilization. Our experiments show that cooperative polling leverages the inherent efficiency advantages of voluntary context switching over involuntary pre-emption. Under CPU-saturated conditions, the scheduling responsiveness of cooperative polling is five times better than that of a well-tuned fair-share scheduler, and orders of magnitude better than that of the best-effort scheduler used in the mainstream Linux kernel.
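A minimal user-space sketch of the general idea, under the assumption that each task voluntarily yields after its event and shares the time of its next event with the scheduler (an illustration only, not the authors' kernel implementation):

    class Task:
        def __init__(self, name, period_ms, work_ms):
            self.name = name
            self.period_ms = period_ms      # time between events (e.g. video frames)
            self.work_ms = work_ms          # CPU time needed per event
            self.next_event_ms = 0.0        # timing info shared with the scheduler
            self.vruntime_ms = 0.0          # fair-share accounting

    def schedule(tasks, horizon_ms=1000.0):
        """Toy scheduler: among tasks whose event is due, run the one with the
        smallest virtual runtime (fairness); otherwise run the task with the
        earliest event (timeliness). Tasks cooperate by yielding after each event."""
        now = 0.0
        log = []
        while now < horizon_ms:
            due = [t for t in tasks if t.next_event_ms <= now]
            if due:
                t = min(due, key=lambda t: t.vruntime_ms)
            else:
                t = min(tasks, key=lambda t: t.next_event_ms)
                now = t.next_event_ms       # idle until the next event
            log.append((now, t.name))
            now += t.work_ms                # task runs, then voluntarily yields
            t.vruntime_ms += t.work_ms
            t.next_event_ms += t.period_ms
        return log

    tasks = [Task("video", period_ms=33.3, work_ms=10.0),
             Task("game", period_ms=16.7, work_ms=8.0),
             Task("batch", period_ms=1.0, work_ms=5.0)]
    for when, name in schedule(tasks)[:10]:
        print(f"{when:7.1f} ms -> {name}")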
163

Dažnų sekų paieškos tikimybinio algoritmo tyrimas / Investigation of a probabilistic algorithm for mining frequent sequences

Cibulskis, Žilvinas 13 June 2006 (has links)
The subject of this work is the problem of finding frequent subsequences in large sequences. A new probabilistic algorithm for mining frequent sequences (ProMFS) is proposed. The paper first reviews the main concepts of the problem, including the Market Basket Data example, whose aim is to find the most frequent sets of items selected by customers. Several existing algorithms for finding frequent sequences are also presented: Apriori, based on the property that any subset of a large item set must itself be large; Eclat, whose main feature is to process each transaction online while dynamically maintaining 2-itemset counts; and the GSP algorithm, which can be used to identify sets that are certainly not frequent. Building on the results of these algorithms, the new probabilistic algorithm for mining frequent sequences was implemented. It is based on estimating statistical characteristics of the main sequence; from these characteristics a much shorter model sequence is generated and analysed with the GSP algorithm, and the subsequence frequencies in the main sequence are then estimated from the GSP results on the model sequence. The new probabilistic algorithm was tested in several experiments using two programs, the first written in Pascal and the second in Delphi... [to full text]
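A minimal sketch of the general scheme described here, assuming a first-order statistical model and substituting a brute-force count of length-2 subsequences for the GSP step (both are simplifications; the thesis's ProMFS differs in detail):

    import random
    from collections import Counter

    def transition_model(seq):
        # Estimate element counts and first-order transition counts.
        counts = Counter(seq)
        trans = Counter(zip(seq, seq[1:]))
        return counts, trans

    def generate_model_sequence(seq, length):
        # Generate a much shorter sequence that mimics the estimated statistics.
        counts, trans = transition_model(seq)
        current = random.choice(seq)
        model = [current]
        for _ in range(length - 1):
            followers = [b for (a, b) in trans if a == current]
            weights = [trans[(current, b)] for b in followers]
            current = (random.choices(followers, weights=weights)[0]
                       if followers else random.choice(seq))
            model.append(current)
        return model

    def frequent_pairs(seq, min_support):
        # Brute-force stand-in for GSP, restricted to length-2 subsequences.
        pairs = Counter(zip(seq, seq[1:]))
        n = max(len(seq) - 1, 1)
        return {p: c / n for p, c in pairs.items() if c / n >= min_support}

    long_sequence = [random.choice("abcd") for _ in range(100_000)]
    model = generate_model_sequence(long_sequence, length=500)
    # Frequencies found in the short model sequence estimate those of the long one.
    print(frequent_pairs(model, min_support=0.05))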
164

Evolutionary algorithms in artificial intelligence : a comparative study through applications

Nettleton, David John January 1994 (has links)
For many years research in artificial intelligence followed a symbolic paradigm which required a level of knowledge described in terms of rules. More recently subsymbolic approaches have been adopted as a suitable means for studying many problems. There are many search mechanisms which can be used to manipulate subsymbolic components, and in recent years general search methods based on models of natural evolution have become increasingly popular. This thesis examines a hybrid symbolic/subsymbolic approach and the application of evolutionary algorithms to a problem from each of the fields of shape representation (finding an iterated function system for an arbitrary shape), natural language dialogue (tuning parameters so that a particular behaviour can be achieved) and speech recognition (selecting the penalties used by a dynamic programming algorithm in creating a word lattice). These problems were selected on the basis that each should involve fundamentally different interactions at the subsymbolic level. Results demonstrate that for the experiments conducted the evolutionary algorithms performed well in most cases. However, the type of subsymbolic interaction that may occur influences the relative performance of evolutionary algorithms which emphasise either top-down (evolutionary programming - EP) or bottom-up (genetic algorithm - GA) means of solution discovery. For the shape representation problem EP is seen to perform significantly better than a GA, and reasons for this disparity are discussed. Furthermore, EP appears to offer a powerful means of finding solutions to this problem, and so the background and details of the problem are discussed at length. Some novel constraints on the problem's search space are also presented which could be used in related work. For the dialogue and speech recognition problems a GA and EP produce good results, with EP performing slightly better. Results achieved with EP have been used to improve the performance of a speech recognition system.
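To make the EP/GA contrast concrete, the following sketch compares a mutation-only strategy with a crossover-plus-mutation strategy on a toy real-valued objective; the objective and parameter settings are assumptions for illustration and are unrelated to the thesis's applications:

    import random

    def fitness(x):
        # Toy objective: maximise a smooth function of a parameter vector.
        return -sum(xi * xi for xi in x)  # optimum at the origin

    def evolutionary_programming(dim=5, pop_size=30, generations=200):
        # EP-style search: every parent produces one mutated offspring; no crossover.
        pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
        for _ in range(generations):
            offspring = [[xi + random.gauss(0, 0.3) for xi in ind] for ind in pop]
            pop = sorted(pop + offspring, key=fitness, reverse=True)[:pop_size]
        return max(pop, key=fitness)

    def genetic_algorithm(dim=5, pop_size=30, generations=200):
        # GA-style search: recombination of two parents, then low-rate mutation.
        pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
        for _ in range(generations):
            ranked = sorted(pop, key=fitness, reverse=True)
            parents = ranked[: pop_size // 2]
            children = []
            while len(children) < pop_size - len(parents):
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, dim)
                child = a[:cut] + b[cut:]
                children.append([xi + random.gauss(0, 0.1) if random.random() < 0.1 else xi
                                 for xi in child])
            pop = parents + children
        return max(pop, key=fitness)

    print("EP best fitness:", fitness(evolutionary_programming()))
    print("GA best fitness:", fitness(genetic_algorithm()))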
165

Fusing Loopless Algorithms for Combinatorial Generation

Violich, Stephen Scott January 2006 (has links)
Loopless algorithms are an interesting challenge in the field of combinatorial generation. These algorithms must generate each combinatorial object from its predecessor in no more than a constant number of instructions, thus achieving theoretically minimal time complexity. This constraint rules out powerful programming techniques such as iteration and recursion, which makes loopless algorithms harder to develop and less intuitive than other algorithms. This thesis discusses a divide-and-conquer approach by which loopless algorithms can be developed more easily and intuitively: fusing loopless algorithms. If a combinatorial generation problem can be divided into subproblems, it may be possible to conquer it looplessly by fusing loopless algorithms for its subproblems. A key advantage of this approach is that it allows existing loopless algorithms to be reused. This approach is not novel, but it has not been generalised before. This thesis presents a general framework for fusing loopless algorithms, and discusses its implications. It then applies this approach to two combinatorial generation problems and presents two new loopless algorithms. The first new algorithm, MIXPAR, looplessly generates well-formed parenthesis strings comprising two types of parentheses. It is the first loopless algorithm for generating these objects. The second new algorithm, MULTPERM, generates multiset permutations in linear space using only arrays, a benchmark recently set by Korsh and LaFollette (2004). Algorithm MULTPERM is evaluated against Korsh and LaFollette's algorithm, and shown to be simpler and more efficient in both space and time.
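For readers unfamiliar with the loopless constraint, a classic example (not one of the algorithms in this thesis) is the Bitner-Ehrlich-Reingold generation of binary reflected Gray codes with focus pointers, where each successive object is produced in a constant number of operations:

    def loopless_gray_codes(n):
        """Yield all n-bit Gray codes; each successor is computed in O(1) time
        using the focus-pointer technique (no inner loops between objects)."""
        bits = [0] * n
        focus = list(range(n + 1))   # focus pointers
        yield bits.copy()
        while True:
            j = focus[0]
            focus[0] = 0
            if j == n:
                return
            focus[j] = focus[j + 1]
            focus[j + 1] = j + 1
            bits[j] ^= 1             # flip exactly one bit: constant work per object
            yield bits.copy()

    for code in loopless_gray_codes(3):
        print(code)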
166

Research and simulation on speech recognition by Matlab

Pan, Linlin January 2014 (has links)
With the development of multimedia technology, speech recognition has increasingly become a research hotspot in recent years. It has a wide range of applications and deals with recognizing the identity of the speaker, which can be classified into speech identification and speech verification according to the decision mode. The main work of this thesis is to study the techniques and algorithms of speech recognition and to build a feasible system to simulate it. The research work and achievements are as follows. First, the author carried out an extensive investigation of the field of speech recognition. There are many speech recognition algorithms; broadly, they can be divided into two categories: direct speech recognition, in which words are recognized directly, and recognition based on a trained model. Second, a usable and reasonable algorithm was selected and studied. In particular, the author studied algorithms for extracting a word's characteristic parameters based on MFCC (Mel-frequency cepstrum coefficients) and for training these characteristic parameters with a GMM (Gaussian mixture model). Third, the author used MATLAB, together with its speech processing toolbox, to write a program implementing the speech recognition algorithm; the overall system comprises modules for signal processing, MFCC characteristic parameters and GMM training. Fourth, simulation and analysis of the results: the MATLAB system reads a wav file, plays it, and then calculates the characteristic parameters automatically, after which the content of the speech signal is distinguished. The author recorded speech from different people to test the system. The simulation results show that when the testing environment is quiet enough and the same speaker records 20 times, the performance of the algorithm approaches 100% for pairs of words with different and with the same syllables. The results are degraded when the test signal is surrounded by a certain level of noise, and the system does not produce good output when the speaker recording the reference signal is not the same person who recorded the test signal.
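A minimal sketch of the MFCC-plus-GMM pipeline described above, written here in Python with librosa and scikit-learn rather than MATLAB; the file names, number of coefficients and mixture components are placeholders, not the thesis's settings:

    import numpy as np
    import librosa
    from sklearn.mixture import GaussianMixture

    def mfcc_features(wav_path, n_mfcc=13):
        # Load the recording and extract MFCC vectors (frames x coefficients).
        y, sr = librosa.load(wav_path, sr=None)
        return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T

    def train_word_models(training_files):
        # training_files: {"word": ["word_take1.wav", ...], ...}  (placeholder names)
        models = {}
        for word, paths in training_files.items():
            feats = np.vstack([mfcc_features(p) for p in paths])
            gmm = GaussianMixture(n_components=8, covariance_type="diag", max_iter=200)
            gmm.fit(feats)
            models[word] = gmm
        return models

    def recognize(models, wav_path):
        # Pick the word whose GMM gives the highest average log-likelihood.
        feats = mfcc_features(wav_path)
        return max(models, key=lambda w: models[w].score(feats))

    # Hypothetical usage:
    # models = train_word_models({"yes": ["yes1.wav", "yes2.wav"],
    #                             "no": ["no1.wav", "no2.wav"]})
    # print(recognize(models, "test.wav"))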
167

Prospects for applying speaker verification to unattended secure banking

Hannah, Malcolm Ian January 1996 (has links)
No description available.
168

On the synthesis of integral and dynamic recurrences

Rapanotti, Lucia January 1996 (has links)
Synthesis techniques for regular arrays provide a disciplined and well-founded approach to the design of classes of parallel algorithms. The design process is guided by a methodology which is based upon a formal notation and transformations. The mathematical model underlying synthesis techniques is that of affine Euclidean geometry with embedded lattice spaces. Because of this model, computationally powerful methods are provided as an effective way of engineering regular arrays. However, at present the applicability of such methods is limited to so-called affine problems. The work presented in this thesis aims at widening the applicability of standard synthesis methods to more general classes of problems. The major contributions of this thesis are the characterisation of classes of integral and dynamic problems, and the provision of techniques for their systematic treatment within the framework of established synthesis methods. The basic idea is the transformation of the initial algorithm specification into a specification with data dependencies of increased regularity, so that corresponding regular arrays can be obtained by a direct application of the standard mapping techniques. We will complement the formal development of the techniques with the illustration of a number of case studies from the literature.
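As a textbook illustration of the kind of specification meant here (not an example taken from the thesis), matrix-vector multiplication can be written as a system of recurrence equations whose data dependencies are affine, indeed uniform, functions of the index point:

    % Matrix-vector product y = A x on the index domain
    % D = { (i,j) : 1 <= i <= n, 1 <= j <= n }.
    \[
    \begin{aligned}
      Y(i,0) &= 0,                          && 1 \le i \le n,\\
      X(0,j) &= x_j,                        && 1 \le j \le n,\\
      X(i,j) &= X(i-1,j),                   && (i,j) \in D,\\
      Y(i,j) &= Y(i,j-1) + A(i,j)\,X(i,j),  && (i,j) \in D,\\
      y_i    &= Y(i,n),                     && 1 \le i \le n.
    \end{aligned}
    \]

Dependencies such as (i,j) -> (i,j-1) and (i,j) -> (i-1,j) fall within the affine model that standard synthesis methods handle; specifications whose dependencies are not affine in the indices lie outside this class and motivate the extensions developed in the thesis.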
169

High Speed Vlsi Implementation Of The Rijndael Encryption Algorithm

Sever, Refik 01 January 2003 (has links) (PDF)
This thesis study presents a high-speed VLSI implementation of the Rijndael Encryption Algorithm, which has been selected as the new Advanced Encryption Standard (AES) algorithm. Both the encryption and the decryption algorithms of Rijndael are implemented as a single ASIC. Although the data size is fixed to 128 bits in the AES, our implementation supports all the data sizes of the original Rijndael algorithm. The core is optimised for both area and speed. Using 149K gates in a 0.35-µm standard CMOS process, a 132 MHz worst-case clock speed is achieved, yielding 2.41 Gbit/s non-pipelined throughput in both encryption and decryption. The design has a latency of 30 clock periods for key expansion, which takes 228 ns in this implementation. A single encryption or decryption of a data block requires at most 44 clock periods. The area of the chip is 12.8 mm² including the pads. The 0.35-µm standard cell libraries of the AMI Semiconductor Company are used in the implementation. The literature survey revealed that this implementation is the fastest published non-pipelined implementation for both encryption and decryption algorithms.
170

Parallel and Distributed Multi-Algorithm Circuit Simulation

Dai, Ruicheng 2012 August 1900 (has links)
With the proliferation of parallel computing, parallel computer-aided design (CAD) has received significant research interest. Transient transistor-level circuit simulation plays an important role in digital/analog circuit design and verification. Increased VLSI design complexity has made circuit simulation an ever-growing bottleneck, making parallel processing an appealing solution for addressing this challenge. In this thesis, we propose and develop a parallel and distributed multi-algorithm approach to leverage the power of multi-core computer clusters for speeding up transistor-level circuit simulation. The targeted multi-algorithm approach provides a natural paradigm for exploiting parallelism in circuit simulation. Parallel circuit simulation is facilitated through the exploration of algorithm diversity, where multiple simulation algorithms collaboratively work on a single simulation task. To utilize computer clusters comprising multi-core processors, each algorithm is executed on a separate node with sufficient system resources such as processing power, memory and I/O bandwidth. We propose two communication schemes, namely master-slave and peer-to-peer schemes, to allow for inter-algorithm communication. Compared with the shared-memory based multi-algorithm implementation, the proposed simulation approach alleviates the cache/memory contention that results from multi-algorithm execution and provides further runtime speedups.
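A minimal sketch of the master-slave communication idea (an illustration only; the solver processes, message contents and toy objective below are assumptions, not the thesis's simulator): several processes run different strategies on the same task, report intermediate results to a master, and the master broadcasts the best result back to every worker:

    import multiprocessing as mp
    import random
    import time

    def objective(x):
        # Toy stand-in for a simulation error measure.
        return x * x

    def solver(name, step, to_master, from_master, iterations=100):
        # Each process stands in for a different simulation algorithm working
        # on the same task with its own strategy (here: a different step size).
        x = random.uniform(-10.0, 10.0)
        for _ in range(iterations):
            x -= step * 2.0 * x + random.gauss(0.0, 0.01)   # crude descent step
            to_master.put((name, x, objective(x)))           # report to the master
            try:
                x = from_master.get_nowait()                 # adopt global best, if sent
            except Exception:
                pass
            time.sleep(0.001)

    if __name__ == "__main__":
        to_master = mp.Queue()
        from_master = {n: mp.Queue() for n in ("coarse", "fine")}
        workers = [mp.Process(target=solver, args=(n, s, to_master, from_master[n]))
                   for n, s in (("coarse", 0.3), ("fine", 0.05))]
        for w in workers:
            w.start()
        best_x, best_val = None, float("inf")
        while any(w.is_alive() for w in workers) or not to_master.empty():
            try:
                name, x, val = to_master.get(timeout=0.1)
            except Exception:
                continue
            if val < best_val:                               # master keeps the global best
                best_x, best_val = x, val
                for q in from_master.values():               # broadcast it to every worker
                    q.put(best_x)
        for w in workers:
            w.join()
        print("global best:", best_x, best_val)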
