  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
861

Utilization of Metaheuristic Methods in the Holistic Optimization of Municipal Right of Way Infrastructure Management

January 2012
abstract: This dissertation presents a portable methodology for holistic planning and optimization of right of way infrastructure rehabilitation, designed to generate monetary savings compared to planning that considers only single infrastructure components. Holistic right of way infrastructure planning requires simultaneous consideration of the three right of way infrastructure components that are typically owned and operated under the same municipal umbrella: roads, sewer, and water. The traditional paradigm for planning right of way asset management involves operating in silos, with little collaboration among utility departments in planning maintenance, rehabilitation, and renewal projects. By collaborating across utilities during the planning phase, savings can be achieved when collocated rehabilitation projects from different right of way infrastructure components are synchronized to occur at the same time. These savings take the form of shared overhead and mobilization costs, and of roadway projects providing open space for subsurface utilities.
Individual component models and a holistic model that use evolutionary algorithms to optimize five-year maintenance, rehabilitation, and renewal plans for the road, sewer, and water components were created and compared. The models were designed to be portable, so that they can be used with whatever infrastructure condition rating, deterioration modeling, and criticality assessment systems a municipality already has in place. The models attempt to minimize the overall component score, a function of the criticality and condition of the segments within each network, by prescribing asset management activities to segments of a component network subject to a constraining budget. The individual models were designed to represent the traditional decision-making paradigm and were compared to the holistic model.
In testing at three different budget levels, the holistic model outperformed the individual models in generating five-year plans that optimize prescribed maintenance, rehabilitation, and renewal across segments to improve the component score. The methodology also achieved the goal of being portable, in that it is compatible with any condition rating, deterioration, and criticality system. / Dissertation/Thesis / Ph.D. Construction 2012
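The budget-constrained evolutionary search the abstract describes can be sketched with a toy genetic algorithm. Everything here — the segment attributes, the component-score formula, the population parameters — is illustrative, not the dissertation's actual model:

```python
import random

def plan_score(plan, segments):
    # Component score: criticality-weighted condition. Treated segments
    # (plan[i] == 1) are assumed restored to the best condition, 1.
    return sum(seg["crit"] * (1 if chosen else seg["cond"])
               for chosen, seg in zip(plan, segments))

def plan_cost(plan, segments):
    return sum(seg["cost"] for chosen, seg in zip(plan, segments) if chosen)

def evolve(segments, budget, pop_size=30, gens=60, seed=0):
    """Minimize the component score subject to a budget constraint."""
    rng = random.Random(seed)
    n = len(segments)

    def feasible(plan):
        return plan_cost(plan, segments) <= budget

    def random_plan():
        plan = [0] * n
        for i in rng.sample(range(n), n):   # greedy random fill within budget
            plan[i] = 1
            if not feasible(plan):
                plan[i] = 0
        return plan

    pop = [random_plan() for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda p: plan_score(p, segments))
        parents = pop[:pop_size // 2]          # keep the best half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n)          # one-point crossover
            child = a[:cut] + b[cut:]
            child[rng.randrange(n)] ^= 1       # point mutation
            while not feasible(child):         # repair: drop segments until affordable
                chosen = [j for j, c in enumerate(child) if c]
                child[rng.choice(chosen)] = 0
            children.append(child)
        pop = parents + children
    best = min(pop, key=lambda p: plan_score(p, segments))
    return best, plan_score(best, segments)
```

A holistic version would score road, sewer, and water plans jointly and discount collocated work; the loop structure stays the same.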
862

Triangle counting and listing in directed and undirected graphs using single machines

Santoso, Yudi 14 August 2018
Triangle enumeration is an important element in graph analysis, and because of this it has been studied extensively. Although the formulation is simple, for large networks the computation becomes challenging due to memory limitations and efficiency. Many algorithms have been proposed to overcome these problems. Some use distributed computing, where the computation is spread among many machines in a cluster; however, this approach has a high cost in terms of hardware resources and energy. In this thesis we studied triangle counting/listing algorithms for both directed and undirected graphs, and searched for methods to do the computation on a single machine. Through detailed analysis, we found ways to improve the efficiency of the computation. Programs implementing the algorithms were built and tested on large networks with up to almost a billion nodes, and the results were then analysed and discussed. / Graduate
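As one concrete single-machine scheme (an illustration of the genre, not necessarily one of the algorithms studied in the thesis), the classic "forward" method orients each edge by degree rank so every triangle is counted exactly once:

```python
from collections import defaultdict

def count_triangles(edges):
    """Single-machine triangle counting: orient each edge from its lower-rank
    to its higher-rank endpoint (rank = degree, ties broken by node id), then
    count common out-neighbors. Each triangle is found exactly once, at its
    lowest-rank vertex."""
    adj = defaultdict(set)
    for u, v in edges:
        if u != v:
            adj[u].add(v)
            adj[v].add(u)
    rank = {v: (len(adj[v]), v) for v in adj}
    fwd = {v: {w for w in adj[v] if rank[w] > rank[v]} for v in adj}
    # For each oriented edge u -> v, shared forward neighbors close a triangle.
    return sum(len(fwd[u] & fwd[v]) for u in fwd for v in fwd[u])
```

Listing instead of counting only requires emitting the members of `fwd[u] & fwd[v]` rather than their count.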
863

Synchronization of Distributed Units without Access to GPS

Carlsson, Erik January 2018
Time synchronization between systems having no external reference can be an issue in small wireless node-based systems. In this thesis a transceiver is designed and implemented in two separate systems. The timing algorithm "Two-Way Time Transfer" is then chosen to correct the timing error between the two free-running clocks of the systems. In conclusion, the results are compared against having both systems derive their timing from GPS.
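The Two-Way Time Transfer correction can be illustrated with the standard NTP-style four-timestamp formula — a generic sketch, not the thesis implementation:

```python
def two_way_offset(t1, t2, t3, t4):
    """Two-way time transfer between nodes A and B.
    t1: A's clock when A transmits    t2: B's clock when B receives
    t3: B's clock when B replies      t4: A's clock when A receives
    Assumes the link delay is the same in both directions."""
    offset = ((t2 - t1) - (t4 - t3)) / 2   # B's clock minus A's clock
    delay  = ((t4 - t1) - (t3 - t2)) / 2   # one-way propagation delay
    return offset, delay
```

Because the two directions' delays cancel in the offset formula, neither node needs an external reference such as GPS — only symmetric links.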
864

[en] RESTRICTED SEARCH DECODING OF SEVERELY FILTERED CONTINUOUS PHASE MODULATION

CARLOS ALBERTO FERREIRA SANTIAGO 09 November 2006
[en] Decoding of severely filtered CPM (Continuous Phase Modulation) schemes with infinite impulse response filters using a limited search algorithm (the M-algorithm) is examined in this thesis. The structure composed of the CPM modulator followed by the filter is treated as a bandwidth-efficient coded modulation scheme. Characterizing the modulation through states requires an infinite number of states. The scheme is analysed by simulation of a discrete-time modulated system with digital signal processing; the use of a digital filter allows a simplified state description relative to previous work. A simplified version of the M-algorithm is analysed in this thesis, and the performance of the system, as well as the behavior of the simplified M-algorithm, is studied by simulation.
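The M-algorithm itself is a breadth-first restricted search that keeps only the M best partial paths. A minimal generic sketch follows; the toy channel model and squared-error metric are illustrative, not the filtered-CPM setup of the thesis:

```python
def m_algorithm(observed, symbols, step, M, init_state=0):
    """Keep the M lowest-metric partial paths at each step.
    `step(state, sym)` returns (next_state, expected_output); the branch
    metric is the squared error against the observed sample."""
    paths = [((), init_state, 0.0)]   # (decided symbols, state, cumulative metric)
    for obs in observed:
        candidates = []
        for syms, state, cost in paths:
            for s in symbols:
                nxt, out = step(state, s)
                candidates.append((syms + (s,), nxt, cost + (out - obs) ** 2))
        candidates.sort(key=lambda p: p[2])
        paths = candidates[:M]        # survivors: the M best paths only
    return min(paths, key=lambda p: p[2])[0]
```

With M large enough this reduces to full breadth-first search; the interesting regime is small M, where complexity stays fixed even though the underlying state space is infinite.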
865

[en] LOSSY LEMPEL-ZIV ALGORITHM AND ITS APPLICATION TO IMAGE COMPRESSION

MURILO BRESCIANI DE CARVALHO 17 August 2006
[en] In this work, a lossy data compression method based on the lossless Lempel-Ziv compression scheme is proposed. Simulations are used to study the performance of the method, called LLZ. The LLZ is also used to compress digital image data, and the results obtained are analysed.
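The idea of adding a distortion tolerance to Lempel-Ziv parsing can be sketched with an LZ78-style dictionary in which a phrase matches the input if every sample is within `tol`; `tol=0` recovers lossless parsing. This is a guess at the flavor of LLZ, not the thesis algorithm:

```python
def llz_parse(data, tol=0):
    """LZ78-style parsing with lossy matching: a dictionary phrase matches if
    each sample differs by at most `tol`. Returns (phrase_index, next_sample)
    pairs; a larger tol tends to yield fewer, longer phrases."""
    dictionary = {0: ()}              # phrase index -> tuple of samples
    phrases = []
    i = 0
    while i < len(data):
        best_idx, best_len = 0, 0     # longest tolerant match against data[i:]
        for idx, phrase in dictionary.items():
            n = len(phrase)
            if n > best_len and i + n < len(data) and all(
                    abs(a - b) <= tol for a, b in zip(phrase, data[i:i + n])):
                best_idx, best_len = idx, n
        nxt = data[i + best_len]
        dictionary[len(dictionary)] = dictionary[best_idx] + (nxt,)
        phrases.append((best_idx, nxt))
        i += best_len + 1
    return phrases

def llz_decode(phrases):
    """Rebuild the (possibly distorted) sequence from the parsed phrases."""
    dictionary = {0: ()}
    out = []
    for idx, nxt in phrases:
        phrase = dictionary[idx] + (nxt,)
        dictionary[len(dictionary)] = phrase
        out.extend(phrase)
    return out
```

Because the match test compares the stored phrase directly against the current data, the per-sample reconstruction error is bounded by `tol`.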
866

[en] A UNIVERSAL ENCODER FOR CONTINUOUS ALPHABET SOURCE COMPRESSION

MARCELO DE ALCANTARA LEISTER 04 September 2006
[en] This dissertation introduces new data compression algorithms, especially for images, and presents applications and theoretical results related to these algorithms. The data to be compressed originate from sources with a continuous alphabet, and the methods can be particularized to discrete sources. The proposed algorithm (LLZ for the continuous case), based on the universal Lempel-Ziv (LZ) coder, admits the introduction of losses while taking advantage of LZ's compression power. As such, the LLZ is an innovative proposal in two ways: first, it couples compaction and quantization in one step; and second, it can be seen as a universal quantizer.
867

Application of improved particle swarm optimization in economic dispatch of power systems

Gninkeu Tchapda, Ghislain Yanick 06 1900
Economic dispatch is an important optimization challenge in power systems: finding the optimal output power of a number of generating units that satisfies the system load demand at the cheapest cost, subject to equality and inequality constraints. Many nature-inspired algorithms, such as particle swarm optimization, have been broadly applied to tackle it. In this dissertation, two improved particle swarm optimization techniques are proposed to solve economic dispatch problems. The first is a hybrid technique with the Bat algorithm: particle swarm optimization, as the main optimizer, integrates the Bat algorithm to boost its velocity update and adjust the improved solution. The second proposed approach is based on Cuckoo operations. Cuckoo search is a robust and powerful technique for solving optimization problems, and the study investigates the effect of its Lévy flight and random search operations in improving the performance of the particle swarm optimization algorithm. The two improved particle swarm algorithms are first tested on a range of 10 standard benchmark functions and then applied to five cases of economic dispatch problems comprising 6, 13, 15, 40, and 140 generating units. / Electrical and Mining Engineering / M. Tech. (Electrical Engineering)
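A baseline global-best PSO — the form both proposed hybrids modify — can be sketched as follows; the parameter values are typical defaults, not those used in the dissertation:

```python
import random

def pso(f, dim, bounds, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Standard inertia-weight, global-best particle swarm optimization.
    Minimizes f over [lo, hi]^dim."""
    rng = random.Random(seed)
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]                 # each particle's best position
    pbest_val = [f(x) for x in X]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (pbest[i][d] - X[i][d])   # cognitive pull
                           + c2 * r2 * (gbest[d] - X[i][d]))     # social pull
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))    # clamp to bounds
            val = f(X[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = X[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = X[i][:], val
    return gbest, gbest_val
```

The hybrids described above would replace parts of the velocity update (Bat-style frequency tuning) or add Lévy-flight perturbations (Cuckoo-style) inside this loop; an economic dispatch objective adds fuel-cost terms and load-balance constraints to `f`.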
868

Algorithms for the computation of Galois groups

Kubát, David January 2018
This thesis covers the topic of the computation of Galois groups over the rationals. Beginning with the classic algorithm by R. Stauduhar, we then review the theory necessary to explain the modular algorithm by K. Yokoyama. More precisely, we discuss the notion of the universal splitting ring of a polynomial. For a separable polynomial, we then study idempotents in the universal splitting ring. The modular algorithm involves computations in the ring of p-adic integers. Examples are given for polynomials of degree 3 and 4.
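For the degree-3 examples mentioned above, the Galois group can even be read off the discriminant: an irreducible cubic over Q has group A3 exactly when its discriminant is a nonzero rational square, and S3 otherwise. A sketch for depressed integer cubics (irreducibility is assumed, not checked):

```python
from math import isqrt

def cubic_galois_group(p, q):
    """Galois group over Q of an irreducible depressed cubic x^3 + p*x + q
    with integer p, q: 'A3' when the discriminant -4p^3 - 27q^2 is a
    positive perfect square, otherwise 'S3'."""
    disc = -4 * p ** 3 - 27 * q ** 2
    if disc > 0 and isqrt(disc) ** 2 == disc:
        return "A3"
    return "S3"
```

Degree 4 and beyond is where resolvent-based methods like Stauduhar's, or the modular approach via idempotents of the universal splitting ring, become necessary.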
869

SYNTHESIS AND TESTING OF THRESHOLD LOGIC CIRCUITS

PALANISWAMY, ASHOK KUMAR 01 December 2014
Threshold logic gates have been gaining importance in recent years due to significant developments in switching devices, renewing interest in the synthesis and testing of circuits built from threshold logic gates. Two important synthesis considerations for threshold logic circuits are addressed: threshold logic function identification, and reducing the total number of threshold logic gates required to represent a given Boolean circuit description. A fast method to identify a given Boolean function as a threshold logic function, together with a weight assignment, is introduced. It characterizes the threshold logic function based on modified Chow's parameters, which yields a drastic reduction in time and complexity. Experimental results show that the proposed method is at least 10 times faster per input, and around 20 times faster for 7 and 8 inputs, compared with algorithmic methods; similarly, it is 100 times faster for 8 inputs compared with the asummability-based method. Existing threshold logic synthesis methods decompose large input functions into smaller ones and synthesize those, which increases the number of threshold logic gates required to represent the given circuit description. The proposed implicit synthesis methods increase the size of the functions that the synthesis algorithm can handle, so the number of threshold logic gates required to implement very large input functions decreases. Experimental results show that the reduction in the TLG count is 24% in the best case and 18% on average. An automatic test pattern generation (ATPG) approach for transition faults in circuits consisting of current-mode threshold logic gates is introduced. The generated pattern for each fault excites the maximum propagation delay at the gate (the fault site), yielding a high-quality ATPG, since current-mode threshold logic circuits are pipelined and the combinational depth at each pipeline stage is practically one.
It is experimentally shown that the fault coverage for all benchmark circuits is approximately 97%, and that the proposed method is time-efficient.
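The objects involved in the identification step can be made concrete: a threshold function outputs 1 iff a weighted sum clears a threshold, and one common form of the modified Chow parameters simply counts true points. This is a generic illustration; the dissertation's exact parameter definition may differ:

```python
from itertools import product

def threshold_gate(weights, T):
    """Boolean function of a threshold gate: 1 iff sum(w_i * x_i) >= T."""
    return lambda xs: int(sum(w * x for w, x in zip(weights, xs)) >= T)

def chow_parameters(f, n):
    """Modified Chow parameters (m_0, m_1, ..., m_n) of an n-input Boolean
    function: m_0 = |f^{-1}(1)|, and m_i counts the true points with x_i = 1.
    Two threshold functions with equal Chow parameters are identical, which
    is what makes these parameters useful for fast identification."""
    true_pts = [xs for xs in product((0, 1), repeat=n) if f(xs)]
    return (len(true_pts),) + tuple(sum(xs[i] for xs in true_pts)
                                    for i in range(n))
```

Identification then reduces to computing these parameters and checking them against a table (or solving for weights), rather than testing all weight assignments.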
870

HIGH LEVEL SYNTHESIS FOR A NETWORK ON CHIP TOPOLOGY

Ali, Baraa Saeed 01 May 2013
Networks on chip (NoCs) have emerged as a panacea for many intercommunication issues imposed by the fast growth of VLSI design. NoCs have been deployed as a solution for inter-core communication delay, area overhead, power consumption, etc. One of the leading factors in speeding up the performance of systems on chip (SoCs) is the efficiency of the scheduling algorithms for the applications running on the SoC. In this thesis we argue that a global scheduling view can significantly improve latency in NoCs. This view can be achieved by having the NoC nodes communicate with each other in a predefined, application-based fashion: by calculating in advance how many clock cycles the nodes need to execute and transmit packets to the network, and how many clock cycles the packets need to travel all the way to the destination through routers (including queuing delay). Knowing that, some of the cores can be kept in a "Hold-On" state until the right time comes to start transmitting. This technique can reduce congestion and may guarantee that the cores do not suffer from severe resource contention, e.g. when accessing memory. The task is achieved by using a network simulator (such as OPNET) and gathering statistics so that the worst-case latency can be determined. Therefore, if NoC nodes can postpone sending packets in a way that does not violate the deadlines of their tasks, packet dropping or livelock can be avoided. It is assumed that the NoC nodes need buffers of their own to hold the ready-to-transmit packets, and this is the cost of the approach.
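The "Hold-On" idea reduces to simple arithmetic once worst-case latencies are known from simulation: each node's latest safe injection offset is its deadline minus its compute time and worst-case network latency. A sketch with hypothetical per-node numbers (node names and cycle counts are made up for illustration):

```python
def hold_on_offsets(nodes):
    """For each node, the latest clock cycle at which it can leave the
    'Hold-On' state and still meet its deadline:
        deadline - compute_cycles - worst_case_latency.
    Staggering injections anywhere up to these offsets spreads traffic and
    eases contention without violating any deadline.
    `nodes` maps name -> (compute_cycles, worst_case_latency, deadline)."""
    offsets = {}
    for name, (compute, wc_latency, deadline) in nodes.items():
        slack = deadline - compute - wc_latency
        if slack < 0:
            raise ValueError(f"{name}: deadline unreachable even with no hold")
        offsets[name] = slack
    return offsets
```

In practice the worst-case latency term would come from the gathered simulator statistics (e.g. OPNET runs) rather than being given directly.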
