91

Algoritmo para obtenção de planos de restabelecimento para sistemas de distribuição de grande porte / Algorithm for elaboration of plans for service restoration to large-scale distribution systems

Mansour, Moussa Reda 03 April 2009
A fast, well-constructed energy restoration plan (ERP) is required to deal with permanent faults in radial distribution systems (RDS): after a faulted zone has been identified and isolated by the protection relays, a proper ERP must be elaborated to restore energy to the de-energized regions. ERPs are also frequently needed during normal operation, to isolate zones for routine network maintenance. Among the objectives of an ERP are (i) interrupting as few customers as possible (ideally none) and (ii) operating a minimal number of switches, while respecting the operational limits of the equipment. Service restoration is therefore a multi-objective problem with partially conflicting objectives. The main methods developed for elaborating ERPs are based on evolutionary algorithms (EAs). The limitation of most of these methods is that they require network simplifications to handle large-scale RDS, and these simplifications considerably restrict the chance of obtaining an adequate ERP.
This work proposes the development and implementation of an algorithm for elaborating ERPs that can deal with large-scale RDS without requiring network simplifications, that is, considering a large number (or all) of the lines, buses, loads, and switches of the system. The proposed algorithm is based on a multi-objective EA, on a new graph-tree encoding called the node-depth encoding (NDE), and on two genetic operators developed to efficiently manipulate the graph trees stored in NDEs. By using a multi-objective EA, the algorithm can explore the search space more broadly; by using the NDE and its operators to represent the RDS computationally, it significantly increases the efficiency of the search for adequate ERPs, because those operators generate only radial configurations in which all consumers remain supplied. The efficiency of the proposed algorithm is demonstrated through simulations on a real distribution system of a Brazilian utility with 3,860 buses, 635 switches, 3 substations, and 23 feeders.
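As a rough sketch of what a node-depth encoding looks like (our own minimal illustration, not the implementation described above): a tree is stored as a list of (node, depth) pairs in depth-first order, so the subtree affected by a switching operation is a contiguous block of the list.

```python
# Minimal, illustrative sketch of a node-depth encoding (NDE) for a tree.
# Names and simplifications are ours; the thesis defines the operators formally.

def nde_from_tree(root, children):
    """Store a tree as a list of (node, depth) pairs in depth-first order."""
    encoding = []
    stack = [(root, 0)]
    while stack:
        node, depth = stack.pop()
        encoding.append((node, depth))
        # Push children in reverse so they are visited in the original order.
        for child in reversed(children.get(node, [])):
            stack.append((child, depth + 1))
    return encoding

def subtree_span(encoding, i):
    """Indices of the subtree rooted at encoding[i]: the contiguous block of
    entries that follow i while their depth stays greater than encoding[i]'s."""
    _, d = encoding[i]
    j = i + 1
    while j < len(encoding) and encoding[j][1] > d:
        j += 1
    return i, j  # half-open interval [i, j)

# A feeder rooted at a substation bus:
#        S
#       / \
#      A   B
#          |
#          C
children = {"S": ["A", "B"], "B": ["C"]}
nde = nde_from_tree("S", children)
print(nde)                   # [('S', 0), ('A', 1), ('B', 1), ('C', 2)]
print(subtree_span(nde, 2))  # (2, 4): the block encoding subtree {B, C}
```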
93

Efficient algorithms for de novo assembly of alternative splicing events from RNA-seq data

Tominaga Sacomoto, Gustavo Akio 06 March 2014
In this thesis, we address the problem of identifying and quantifying variants (alternative splicing and genomic polymorphism) in RNA-seq data when no reference genome is available, without assembling the full transcripts. Based on the idea that each variant corresponds to a recognizable pattern, a bubble, in a de Bruijn graph constructed from the RNA-seq reads, we propose a general model for all variants in such graphs. We then introduce an exact method, called KisSplice, to extract alternative splicing events and show that it outperforms general-purpose transcriptome assemblers. We put extra effort into making KisSplice as scalable as possible. To improve the running time, we propose a new polynomial-delay algorithm to enumerate bubbles and show that it is several orders of magnitude faster than previous approaches. To reduce memory consumption, we propose a new compact way to build and represent a de Bruijn graph and show that our approach uses 30% to 40% less memory than the state of the art, with an insignificant impact on construction time. Additionally, we apply the techniques developed to list bubbles to two classical problems: cycle enumeration and the K-shortest paths problem. We give the first optimal algorithm to list cycles in undirected graphs, improving over Johnson's algorithm; this is the first improvement to this problem in almost 40 years. We then consider a different parameterization of the K-shortest (simple) paths problem: instead of bounding the number of st-paths, we bound their weight. We present new algorithms that use exponentially less memory than previous approaches.
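As a rough illustration of the bubble pattern referred to above (a toy simplification of ours, not KisSplice's code): two variants of the same locus that share their flanking sequence induce two vertex-disjoint paths between a common source and sink in the de Bruijn graph of the reads.

```python
# Toy illustration of a "bubble" in a de Bruijn graph built from reads.
# This is our simplification for intuition only, not KisSplice's algorithm.
from collections import defaultdict

def de_bruijn(reads, k):
    """Directed graph on (k-1)-mers with an edge for every k-mer in the reads."""
    graph = defaultdict(set)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            graph[kmer[:-1]].add(kmer[1:])
    return graph

# Two variants of the same locus that differ in the middle (e.g. an SNP or a
# skipped exon): identical flanks, different internal sequence.
variant_a = "ACGTTGCA"
variant_b = "ACGATGCA"
g = de_bruijn([variant_a, variant_b], k=4)

# The shared prefix 3-mer "ACG" branches into two paths ("CGT..." / "CGA...")
# that reconverge at "TGC" and continue to the shared suffix "GCA": a bubble.
print(sorted(g["ACG"]))  # ['CGA', 'CGT']: the two arms of the bubble
```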
94

An Efficient, Extensible, Hardware-aware Indexing Kernel

Sadoghi Hamedani, Mohammad 20 June 2014
Modern hardware has the potential to play a central role in scalable data management systems. A realization of this potential arises in the context of indexing queries, a recurring theme in real-time data analytics, targeted advertising, algorithmic trading, and data-centric workflows, and of indexing data, a challenge in multi-version analytical query processing. To enhance query and data indexing, this thesis presents an efficient, extensible, and hardware-aware indexing kernel. The kernel rests upon novel data structures and (parallel) algorithms that exploit the capabilities of modern hardware, especially the abundance of main memory, multi-core architectures, hardware accelerators, and solid-state drives. The thesis first presents our query-indexing techniques for processing queries in data-intensive applications subject to ever-increasing data volume and velocity. At the core of the query-indexing kernel lies the BE-Tree family of memory-resident indexing structures, which scales by overcoming the curse of dimensionality through a novel two-phase space-cutting technique, effective top-k processing, and adaptive parallel algorithms that operate directly on compressed data and exploit multi-core architectures. Furthermore, we achieve line-rate processing by harnessing the unprecedented degrees of parallelism and pipelining available only through low-level logic design on FPGAs. We present a comprehensive evaluation that establishes the superiority of BE-Tree over state-of-the-art algorithms. The thesis then expands the scope of the indexing kernel and describes how to accelerate analytical queries on (multi-version) databases by enabling indexes on the most recent data. The goal is to reduce the overhead of index maintenance so that indexes can be used effectively for analytical queries without becoming a heavy burden on transaction throughput. To this end, we redesign the data structures in the storage hierarchy to employ an extra level of indirection over solid-state drives. This indirection layer dramatically reduces the number of magnetic-disk I/Os needed to update indexes and localizes index maintenance. As a result, by rethinking how data is indexed, we eliminate the dilemma between update and query performance and substantially reduce index-maintenance and query-processing costs.
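A toy sketch of the general idea of indexing queries rather than data (our simplification for intuition only; BE-Tree's two-phase space cutting is far more involved): subscriptions, i.e. Boolean predicates over attributes, are grouped by an attribute they constrain, so an incoming event only probes the groups for attributes it actually carries.

```python
# Minimal flavor of "indexing the queries instead of the data".
# Illustrative only; not BE-Tree or any API from the thesis.
from collections import defaultdict

class QueryIndex:
    def __init__(self):
        self.by_attribute = defaultdict(list)  # attribute -> [(query_id, predicates)]

    def add(self, query_id, predicates):
        """predicates: dict attribute -> (low, high) interval the event must fall in."""
        # Partition on one constrained attribute; BE-Tree picks this adaptively,
        # here we simply take the first one.
        pivot = next(iter(predicates))
        self.by_attribute[pivot].append((query_id, predicates))

    def match(self, event):
        """Return ids of all queries whose every predicate is satisfied by the event."""
        hits = []
        for attribute in event:
            for query_id, predicates in self.by_attribute.get(attribute, []):
                if all(attr in event and lo <= event[attr] <= hi
                       for attr, (lo, hi) in predicates.items()):
                    hits.append(query_id)
        return hits

index = QueryIndex()
index.add("q1", {"price": (0, 100), "volume": (10, 50)})
index.add("q2", {"price": (200, 300)})
print(index.match({"price": 80, "volume": 20}))   # ['q1']
print(index.match({"price": 250, "volume": 5}))   # ['q2']
```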
96

A Refinement-Based Methodology for Verifying Abstract Data Type Implementations

Divakaran, Sumesh January 2015
This thesis is about techniques for proving the functional correctness of Abstract Data Type (ADT) implementations. We provide a framework, based on a theory of refinement, for proving the functional correctness of imperative-language implementations of ADTs. Our theory of refinement supports reasoning about both declarative and imperative implementations and facilitates compositional reasoning about complex implementations that may use several layers of sub-ADTs. Based on this theory, we propose a methodology for proving the functional correctness of an existing imperative-language implementation of an ADT: a mechanizable translation from an abstract model in the Z language to an abstract implementation in VCC's ghost language, followed by a technique for carrying out the refinement checks completely within the VCC tool. We apply the methodology to the scheduling-related functionality of FreeRTOS, a popular open-source real-time operating system; we found major deviations from the intended behavior and produced a machine-checked proof of the correctness of the fixed code. We also present an efficient way to phrase the refinement conditions in VCC, which considerably improves VCC's performance. Evaluated on a simplified version of FreeRTOS constructed for this verification exercise, the efficient approach always terminates in VCC and reduces the total time taken by a naive check by over 90%.
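The refinement obligation at the core of such proofs can be stated in miniature as follows (an executable sketch of ours, not the Z/VCC machinery used in the thesis): every concrete operation, viewed through an abstraction function, must have the same effect as the corresponding abstract operation.

```python
# Executable sketch of the core refinement obligation. The thesis discharges
# such conditions mechanically in VCC against a Z model; this is only the idea.

# Abstract ADT: a set of integers.
def abs_insert(s, x):
    return s | {x}

# Concrete ADT: a sorted, duplicate-free list (e.g. backing an ordered table).
def conc_insert(lst, x):
    if x in lst:
        return list(lst)
    out = list(lst)
    out.append(x)
    out.sort()
    return out

def abstraction(lst):
    """Abstraction function: the set of elements stored in the list."""
    return set(lst)

def check_refinement(concrete_states, values):
    """Test-based approximation of the proof obligation
       abstraction(conc_insert(c, x)) == abs_insert(abstraction(c), x)."""
    for c in concrete_states:
        for x in values:
            assert abstraction(conc_insert(c, x)) == abs_insert(abstraction(c), x)
    return True

print(check_refinement([[], [1, 3], [2, 4, 7]], values=range(5)))  # True
```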
97

Learning discrete word embeddings to achieve better interpretability and processing efficiency

Beland-Leblanc, Samuel 12 1900
The ubiquitous use of word embeddings in Natural Language Processing is proof of their usefulness and adaptability to a multitude of tasks. However, their continuous nature is prohibitive in terms of computation, storage, and interpretation. In this work, we propose a method for learning discrete word embeddings directly. The model is an adaptation of a novel database-searching method using state-of-the-art natural language processing techniques such as Transformers and LSTMs. On top of obtaining embeddings that require a fraction of the resources to store and process, our experiments strongly suggest that our representations learn basic units of meaning in latent space akin to lexical morphemes. We call these units sememes, i.e., semantic morphemes. We demonstrate that our model has great generalization potential and outputs representations showing strong semantic and conceptual relations between related words.
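A rough picture of what a discrete word embedding looks like (our own illustrative stand-in, not the model trained in the thesis): each continuous vector is replaced by a short tuple of small integers, for example by quantizing its sub-vectors against per-subspace codebooks.

```python
# Toy illustration of turning continuous word vectors into short discrete codes
# by quantizing sub-vectors against small codebooks (product-quantization style).
# In the thesis the discrete codes are learned end to end; here the codebooks
# are fixed and random, just to show the shape of a discrete representation.
import numpy as np

rng = np.random.default_rng(0)

dim, n_subspaces, codebook_size = 8, 4, 16
sub_dim = dim // n_subspaces
codebooks = rng.normal(size=(n_subspaces, codebook_size, sub_dim))

def discretize(vector):
    """Map a continuous vector to n_subspaces small integers (its discrete code)."""
    code = []
    for i in range(n_subspaces):
        sub = vector[i * sub_dim:(i + 1) * sub_dim]
        distances = np.linalg.norm(codebooks[i] - sub, axis=1)
        code.append(int(np.argmin(distances)))
    return code

word_vector = rng.normal(size=dim)  # stand-in for a pretrained embedding
print(discretize(word_vector))      # e.g. [3, 11, 0, 7]: 4 small ints instead of 8 floats
```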
98

Analýza a optimalizace databázových systémů / Database System Analysis and Implementation

Třetina, Jan January 2008
With increasing demands on the speed and availability of information technologies, the process of optimization gains more and more importance. Whether it concerns search-engine optimization, operating-system optimization, or application (source-code) optimization, the goal is a faster, smaller, and more maintainable solution. In my thesis I deal with the optimization of database systems, which includes low-level database tuning (physical organization of data and indices), database management system tuning, and query optimization. I focus on the optimization of Microsoft SQL Server 2005 in an enterprise environment.
99

An Open and Extensible Modeling Strategy for Creating Planar Subdivision Models for Computational Mechanics / Uma estratégia de modelagem aberta e extensível para a criação de modelos de subdivisões planares para mecânica computacional

15 February 2022
This work presents an open and extensible modeling strategy, developed in Python, for creating planar subdivision models. The strategy takes the form of a general-purpose geometric modeling library, called HETOOL, developed in this work and based on the well-known Half-Edge topological data structure. In addition to handling the topological and geometric aspects of modeling, the strategy allows the end user to configure simulation attributes. These characteristics, together with the availability of the source code, make it useful and relevant for the development of educational modeling tools for computational mechanics. To demonstrate the applicability of the proposed strategy, an application called the Finite Element Method Educational Computer Program (FEMEP) was developed; it allows the creation of two-dimensional finite element models, with mesh generation per region, for various types of computational mechanics simulations. The developed package provides iterative, dynamic modeling that performs automatic intersection of the modeled geometric elements. HETOOL offers several features and conveniences to the user, allowing the package to be used even without knowledge of the topological concepts involved in the implementation of the underlying data structure. The package makes it possible to create and configure attributes simply and quickly from a JSON file. This versatility in creating attributes allows the package to be applied to a variety of problems in engineering and other scientific areas.
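A minimal flavor of the half-edge records such a library is built on (generic textbook fields, not necessarily HETOOL's actual API):

```python
# Minimal half-edge records, in the spirit of the structure HETOOL builds on.
# Field names here are generic textbook ones, not HETOOL's interface.

class HalfEdge:
    def __init__(self):
        self.origin = None   # vertex this half-edge starts at
        self.twin = None     # oppositely oriented half-edge on the same segment
        self.next = None     # next half-edge around the same face
        self.face = None     # face lying to the left of this half-edge

class Vertex:
    def __init__(self, x, y):
        self.x, self.y = x, y
        self.half_edge = None  # one outgoing half-edge

def make_edge(v1, v2):
    """Create the two twin half-edges of a segment between v1 and v2."""
    he1, he2 = HalfEdge(), HalfEdge()
    he1.origin, he2.origin = v1, v2
    he1.twin, he2.twin = he2, he1
    v1.half_edge, v2.half_edge = he1, he2
    return he1, he2

a, b = Vertex(0.0, 0.0), Vertex(1.0, 0.0)
ab, ba = make_edge(a, b)
assert ab.twin is ba and ba.twin is ab and ab.origin is a
```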
100

HFS Plus File System Exposition and Forensics

Ware, Scott 01 January 2012
The Macintosh Hierarchical File System Plus, HFS+, commonly referred to as Mac OS Extended, was introduced in 1998 with Mac OS 8.1. HFS+ is an update to HFS, the Mac OS Standard format, that offers more efficient use of disk space, implements internationalization-friendly file names, provides future support for named forks, and facilitates booting on non-Mac OS operating systems through different partition schemes. The HFS+ file system is efficient, yet complex. It makes use of B-trees to implement key data structures for maintaining metadata about folders, files, and data. What happens within HFS+ at volume format, or when folders, files, and data are created, moved, or deleted, is largely a mystery to those who are not programmers. The vast majority of information on this subject is relegated to documentation in books, papers, and online content that directs the reader to C code, libraries, and include files. If one cannot interpret the complex C or Perl implementations, there is little opportunity to develop a basic understanding of the internals of HFS+ and how they work. The basic concepts learned from this research will facilitate a better understanding of the HFS+ file system and journal as changes resulting from adding and deleting files or folders are applied in a controlled, easy-to-follow process. The primary tool used to examine the file system changes is a proprietary command-line interface (CLI) tool called fileXray, a custom implementation of the HFS+ file system that can examine file-system, metadata, and data-level information that is not available in other tools. We also use Apple's command-line tool Terminal, the WinHex graphical user interface (GUI) editor, The Sleuth Kit command-line tools, and DiffFork 1.1.9 to help document and illustrate the file system changes. The processes used to document the pristine and changed versions of the file system are kept very similar across experiments, so that the output files are identical except for the actual change; keeping the processes the same enables baseline comparisons using a diff tool like DiffFork. Side-by-side and line-by-line comparisons of the allocation, extents overflow, catalog, and attributes files help identify where the changes occurred. The target device in this experiment is a two-gigabyte Universal Serial Bus (USB) thumb drive formatted with a GUID (Globally Unique Identifier) Partition Table. Where practical, HFS+ special files and data structures are manually parsed, documented, and illustrated.
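The before-and-after comparison workflow described above can be reproduced in miniature with a plain unified diff of two text dumps (the thesis uses fileXray output and DiffFork; the file names and dump format in this sketch are hypothetical):

```python
# Unified diff of two text dumps of file-system metadata, mirroring the
# pristine-vs-changed comparisons described in the abstract. The dump files
# named in the usage example are placeholders, not artifacts from the thesis.
import difflib
from pathlib import Path

def diff_dumps(pristine_path, changed_path):
    """Print a unified diff of two text dumps of file-system metadata."""
    pristine = Path(pristine_path).read_text().splitlines(keepends=True)
    changed = Path(changed_path).read_text().splitlines(keepends=True)
    for line in difflib.unified_diff(pristine, changed,
                                     fromfile=pristine_path, tofile=changed_path):
        print(line, end="")

# Hypothetical usage with dumps captured before and after adding a file:
# diff_dumps("catalog_pristine.txt", "catalog_after_add.txt")
```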
