1

Power and Memory Efficient Hashing Schemes for Some Network Applications

Yu, Heeyeol 2009 May 1900 (has links)
Hash tables (HTs) are used to implement various lookup schemes, and they need to be efficient in terms of speed, space utilization, and power consumption. For IP lookup, hashing schemes are attractive because of their deterministic O(1) lookup performance and low power consumption, in contrast to TCAM- and trie-based approaches. As the size of the IP lookup table grows exponentially, scalable lookup performance is highly desirable; for next-generation high-speed routers this is a vital requirement, since IP lookup remains in the critical data path and demands predictable throughput. However, recently proposed hash schemes, like the Bloomier filter HT and the Fast HT (FHT), suffer from a number of flaws, including setup failures, update overheads, duplicate keys, and pointer overheads. In this dissertation, four novel hashing schemes and their architectures are proposed to address these concerns, using pipelined Bloom filters and a fingerprint filter designed for a memory-efficient approximate match. For IP lookup, two new hash schemes, a Hierarchically Indexed Hash Table (HIHT) and a Fingerprint-based Hash Table (FPHT), are introduced to guarantee a perfect match without pointer overhead. Further, two hash mechanisms are also proposed to provide memory- and power-efficient lookup for packet processing applications. Among the four proposed schemes, the HIHT and the FPHT are evaluated for their performance and compared with TCAM- and trie-based IP lookup schemes. Various sizes of IP lookup tables are considered to demonstrate scalability in terms of speed, memory use, and power consumption. While an FPHT uses less memory than an HIHT, an FPHT-based IP lookup scheme reduces power consumption by a factor of 51 compared to a TCAM-based scheme and requires 1.8 times the memory of a trie-based scheme.
This dissertation also proposes a multi-tiered packet classifier that saves up to 3.2 times the power of an existing parallel packet classifier. Intrinsic hashing schemes lack high throughput, unlike partitioned Ternary Content Addressable Memory (TCAM)-based schemes, which are capable of parallel lookups despite their large power consumption. To bridge this gap, a hybrid CAM (HCAM) architecture is introduced. Simulation results indicate that HCAM achieves the same throughput as contemporary schemes while using 2.8 times less memory and 3.6 times less power.
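The fingerprint idea underlying schemes like the FPHT can be illustrated with a small sketch (hypothetical Python, not the dissertation's actual FPHT design): instead of storing full keys, each bucket keeps a short hash fingerprint per entry, trading a tiny false-positive probability for large memory savings.

```python
import hashlib

class FingerprintTable:
    """Illustrative fingerprint-based hash table: each bucket stores a short
    fingerprint per key, so lookups avoid storing or comparing full keys.
    A sketch only; the class name and parameters are assumptions."""

    def __init__(self, num_buckets=1024, fp_bits=8):
        self.num_buckets = num_buckets
        self.fp_mask = (1 << fp_bits) - 1
        # each bucket maps fingerprint -> stored value
        self.buckets = [dict() for _ in range(num_buckets)]

    def _hashes(self, key):
        # derive bucket index and fingerprint from one strong hash
        h = int.from_bytes(hashlib.sha256(key.encode()).digest()[:8], "big")
        return h % self.num_buckets, (h >> 32) & self.fp_mask

    def insert(self, key, value):
        idx, fp = self._hashes(key)
        self.buckets[idx][fp] = value

    def lookup(self, key):
        idx, fp = self._hashes(key)
        # may very rarely return a false positive for an absent key
        return self.buckets[idx].get(fp)

table = FingerprintTable()
table.insert("192.168.0.0/16", "port3")
result = table.lookup("192.168.0.0/16")
```

Note that full-key storage is replaced by an 8-bit fingerprint here, which is why an approximate-match filter of this kind needs the dissertation's additional machinery to guarantee a perfect match.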
2

Scrabble / Scrabble

Picek, Radomír January 2008 (has links)
This thesis describes the social board game Scrabble and its realization as a computer game. It examines, step by step, all the important aspects that affect the performance of the implementation: in particular, the choice of a suitable data structure for storing the vocabulary, which affects the efficiency of move generation, and the selection of appropriate algorithms with regard to maximum speed. Particular emphasis is placed on the artificial-intelligence opponent and its ability to compete not only with amateurs but also with professional Scrabble players.
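The vocabulary structure discussed above can be sketched as a trie (an illustrative Python outline, not the thesis's implementation); the prefix test is what lets a move generator abandon a dead-end letter sequence early.

```python
class TrieNode:
    def __init__(self):
        self.children = {}
        self.is_word = False

class Trie:
    """Minimal vocabulary trie: move generation can prune any rack
    permutation whose prefix does not occur in the dictionary."""

    def __init__(self, words=()):
        self.root = TrieNode()
        for w in words:
            self.insert(w)

    def insert(self, word):
        node = self.root
        for ch in word:
            node = node.children.setdefault(ch, TrieNode())
        node.is_word = True

    def has_prefix(self, prefix):
        # True if some dictionary word starts with `prefix`
        node = self.root
        for ch in prefix:
            node = node.children.get(ch)
            if node is None:
                return False
        return True

    def contains(self, word):
        node = self.root
        for ch in word:
            node = node.children.get(ch)
            if node is None:
                return False
        return node.is_word
```

During move generation, `has_prefix` prunes entire branches of the search: if "QK" is no word's prefix, no placement beginning with those tiles needs to be explored further.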
3

Efficient Skyline Community Discovery in Large Networks

Akber, Mohammad Ali 30 August 2022 (has links)
Every entity in the real world can be described uniquely by its attributes, and similar entities can be ranked by those attributes; for example, a professor can be ranked by his or her number of publications, citations, etc. A community is formed by a group of connected entities, and the individual ranking of each entity plays an important role in the quality of the community. The skyline communities of a network are its highest-ranked communities. But how do we define this ranking? Some ranking models consider only a single attribute [16], whereas others [15][23] consider multiple attributes. Intuitively, multiple attributes represent a community better and produce better results. We propose a novel community discovery model that considers multiple attributes when ranking communities and is efficient in terms of both computation time and result size. We use a progressive algorithm (one that can produce results gradually, without depending on future processing) to compute communities in an order such that a community is guaranteed not to be dominated by any community generated after it. To verify the dominance relationship between two communities, we introduce a range-based comparison in which dominance is decided by the set of nodes each group dominates: a group's domination list contains its members along with the nodes they dominate, and if the domination list of one group is a subset of another group's, we say the second group dominates the first, since the second group then dominates every node of the first.
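The subset-based dominance test described above can be sketched directly (hypothetical Python; `dominated_by_node`, mapping each node to the set of nodes it dominates, is an assumed input, not the thesis's data model).

```python
def second_dominates_first(first, second, dominated_by_node):
    """Range-based dominance check: the second group dominates the first
    when the first group's domination list is a subset of the second's.
    A group's domination list is its members plus the nodes they dominate."""

    def domination_list(group):
        s = set(group)  # members belong to their own list
        for node in group:
            s |= dominated_by_node.get(node, set())
        return s

    return domination_list(first) <= domination_list(second)
```

For example, if node "b" dominates "a", "x", and "y" while node "a" dominates only "x", then the group {"b"} dominates the group {"a"}, but not vice versa.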
4

Korektor diakritiky / Automatic Generator of Diacritics

Veselý, Lukáš January 2007 (has links)
The goal of this diploma thesis is the design and implementation of an application that adds diacritics to, or removes them from, Czech text. The retrieval structure "trie" is described, along with its relation to finite state automata. Further, an algorithm for the minimization of finite state automata is described, and various methods for adding diacritics are discussed. In the practical part, an implementation in the Java programming language using an object-oriented approach is presented. The achieved results are evaluated and analysed in the conclusion.
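One dictionary-based approach to restoring diacritics can be sketched as follows (illustrative Python, not the thesis's Java implementation; in practice, ambiguous stripped forms would need context handling rather than a plain mapping).

```python
import unicodedata

def strip_diacritics(text):
    """Remove combining marks via Unicode decomposition,
    e.g. 'příliš' -> 'prilis'."""
    norm = unicodedata.normalize("NFD", text)
    return "".join(c for c in norm if not unicodedata.combining(c))

def build_restore_map(vocabulary):
    """Map each stripped form back to its accented spelling.
    Assumes one accented form per stripped form, which real Czech
    text does not guarantee."""
    return {strip_diacritics(w): w for w in vocabulary}

def add_diacritics(text, restore):
    # words absent from the vocabulary are left unchanged
    return " ".join(restore.get(w, w) for w in text.split())
```

A trie over the stripped forms, as the thesis describes, would serve the same role as the dictionary here while also supporting prefix-based lookup.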
5

Phonotactic Structures in Swedish : A Data-Driven Approach

Hultin, Felix January 2017 (has links)
Ever since Bengt Sigurd laid out the first comprehensive description of Swedish phonotactics in 1965, it has been the main point of reference within the field. This thesis attempts a new approach by presenting a computational and statistical model of Swedish phonotactics that can be built from any corpus of IPA phonetic transcriptions. The model is a weighted trie, represented as a finite state automaton, where states are phonemes linked by transitions in valid phoneme sequences; this adds the benefits of being probabilistic and expressible as a regular language. It was implemented using the Nordisk Språkteknologi (NST) pronunciation lexicon and was tested against a couple of rule sets defined by Sigurd relating to initial two-consonant clusters of phonemes and phoneme classes. The results largely agree with Sigurd's rules and illustrate the benefits of the model, in that it can effectively be used to pattern-match against phonotactic information using a regular-expression-like syntax.
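The weighted-trie model can be sketched as follows (an illustrative Python outline; the thesis's actual model was built from the NST lexicon and exposed as a finite state automaton): every prefix of a phoneme sequence is a state, and transition counts yield the probability of a candidate sequence.

```python
from collections import defaultdict

class WeightedTrie:
    """Sketch of a weighted phonotactic trie: transition counts from each
    attested prefix to the next phoneme give relative probabilities."""

    def __init__(self):
        # counts[prefix_tuple][next_phoneme] = occurrences
        self.counts = defaultdict(lambda: defaultdict(int))

    def add(self, phonemes):
        prefix = ()
        for p in phonemes:
            self.counts[prefix][p] += 1
            prefix += (p,)

    def probability(self, phonemes):
        """Product of conditional transition probabilities; 0.0 for any
        sequence containing an unattested transition."""
        prob, prefix = 1.0, ()
        for p in phonemes:
            total = sum(self.counts[prefix].values())
            if total == 0 or p not in self.counts[prefix]:
                return 0.0
            prob *= self.counts[prefix][p] / total
            prefix += (p,)
        return prob
```

A cluster forbidden by Sigurd's rules would simply receive probability zero, while attested clusters are weighted by corpus frequency.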
6

Hierarchická komprese / Hierarchical compression

Kreibichová, Lenka January 2011 (has links)
Most existing text compression methods are based on the same basic concept: first, the input text is divided into a sequence of text units, which can be single symbols, syllables, or words. When compressing large text files, searching for redundancies over longer text units is usually more effective than searching over shorter ones, but if we choose words as the base units we can no longer catch redundancies among symbols and syllables. In this paper we propose a new text compression method called hierarchical compression. It constructs a hierarchical grammar to capture redundancies over syllables, words, and higher levels of the text; the code of the text then consists of the code of this grammar. We propose a strategy for constructing the hierarchical grammar for a concrete input text, along with an effective way to encode it. The proposed method is compared with other common text compression methods.
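Grammar-based compression of this kind resembles digram-replacement schemes such as Re-Pair; one replacement round can be sketched as follows (illustrative Python, not the paper's actual algorithm).

```python
from collections import Counter

def repair_step(seq, next_symbol):
    """One round of Re-Pair-style digram replacement: the most frequent
    adjacent pair is replaced by a fresh nonterminal, yielding one grammar
    rule. Returns (new_sequence, rule) or (sequence, None) when no pair
    repeats."""
    pairs = Counter(zip(seq, seq[1:]))
    if not pairs:
        return seq, None
    (a, b), freq = pairs.most_common(1)[0]
    if freq < 2:
        return seq, None  # nothing left worth abstracting
    out, i = [], 0
    while i < len(seq):
        if i + 1 < len(seq) and seq[i] == a and seq[i + 1] == b:
            out.append(next_symbol)  # substitute the nonterminal
            i += 2
        else:
            out.append(seq[i])
            i += 1
    return out, (next_symbol, (a, b))
```

Iterating this step builds up exactly the kind of hierarchy the paper describes: early rounds capture symbol-level redundancy, later rounds capture syllable- and word-level patterns over the nonterminals introduced before.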
7

Enhancing symbolic execution using memoization and incremental techniques

Yang, Guowei, active 2013 20 September 2013 (has links)
The last few years have seen a resurgence of interest in the use of symbolic execution, a program analysis technique developed more than three decades ago to analyze program execution paths. However, symbolic execution remains an expensive technique, and scaling it remains a key technical challenge. Two key factors contribute to its cost: (1) the number of paths that need to be explored and (2) the cost of constraint solving, which is typically required for each path explored. Our insight is that the cost of symbolic execution can be reduced by an incremental approach, which uses static and dynamic analysis to focus on relevant parts of the code and to reuse previous analysis results, thereby addressing both key cost factors. This dissertation presents Memoized Incremental Symbolic Execution, a novel approach that embodies this insight. Using symbolic execution in practice often requires several successive runs of the technique on largely similar underlying problems, where successive problems differ due to some change: to the code, e.g., to fix a bug; to the analysis parameters, e.g., to increase the path exploration depth; or to the correctness properties, e.g., to check against stronger specifications written as assertions in the code. Memoized Incremental Symbolic Execution, a three-fold approach, leverages the similarities among the successive problems to reduce the total cost of applying the technique. Our prototype tool-set is based on Symbolic PathFinder. Experimental results show that Memoized Incremental Symbolic Execution enhances the efficacy of symbolic execution.
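The memoization half of the idea can be illustrated with a toy cache around a constraint solver (hypothetical Python, not Symbolic PathFinder): on a re-run after a small change, only path conditions not seen before pay the solving cost.

```python
class MemoizedSolver:
    """Toy illustration of memoizing constraint-solver results per path
    condition, so repeated runs on largely similar problems reuse
    earlier answers instead of re-solving them."""

    def __init__(self, solver):
        self.solver = solver  # the expensive satisfiability check
        self.cache = {}
        self.calls = 0        # how many real solver invocations happened

    def check(self, path_condition):
        # order of constraints within a path condition does not matter
        key = frozenset(path_condition)
        if key not in self.cache:
            self.calls += 1
            self.cache[key] = self.solver(path_condition)
        return self.cache[key]
```

When the same path condition recurs across two exploration runs, `calls` stays flat while `check` still answers, which is the source of the savings the dissertation measures at much larger scale.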
8

Optical characterization of Polar winter aerosols and clouds

Baibakov, Konstantin January 2014 (has links)
Abstract: The Arctic region is particularly sensitive to climate change and has recently undergone major alterations, including a dramatic decrease in sea-ice extent. Our ability to model and potentially mitigate climate change is limited, in part, by the uncertainties associated with radiative forcing due to direct and indirect aerosol effects, which in turn depend on our understanding of aerosol and cloud processes. Aerosol loading can be characterized by the aerosol optical depth (AOD), which is the most important (extensive, or bulk) aerosol radiative parameter and arguably the most important regional indicator of aerosol behavior. One of the most important shortcomings in our understanding of Arctic aerosols is their behavior during the polar winter, largely because of the lack of night-time AOD measurements. In this work we use lidar and starphotometry instruments in the Arctic to obtain vertically resolved aerosol profiles and column-integrated representations of those profiles (AODs), respectively. In addition, data from a space-borne lidar (CALIOP) are used to provide a pan-Arctic context and seasonal statistics in support of the ground-based measurements. The latter were obtained at the Eureka (80° N, 86° W) and Ny Ålesund (79° N, 12° E) high-Arctic stations during the polar winters of 2010-11 and 2011-12. The physical significance of the small-amplitude AOD variations that are typical of the Arctic polar winter requires verification to ensure that artifacts (such as incomplete cloud screening) do not contribute to them. A process-level, event-based analysis (with a time resolution of about one minute) is essential to ensure that the extracted extensive (bulk) and intensive (per-particle) optical and microphysical indicators are coherent and physically consistent. Using the starphotometry-lidar synergy we characterized several distinct events throughout the measurement period, including aerosol, ice crystal, thin cloud, and polar stratospheric cloud (PSC) events. In general, the fine-mode (<1 µm) and coarse-mode (>1 µm) AODs from starphotometry (τ[subscript f] and τ[subscript c]) were coherent with their lidar analogues produced from integrated profiles; however, several inconsistencies related to instrumental and environmental factors were also found. The division of starphotometer AODs into τ[subscript f] and τ[subscript c] components was further exploited to eliminate coarse-mode cloud optical depths (spectral cloud screening) and subsequently to compare τ[subscript f] with AODs cloud-screened using a traditional (temporally based) approach. While temporal and spectral cloud-screening case studies at process-level resolution yielded good to moderate results in terms of the coherence between spectrally and temporally screened data (both the starphotometer and lidar optical depths being temporally screened), the seasonal results apparently still contained cloud-contaminated data. Forcing an agreement using a more restrictive, second-pass, clear-sky criterion (a "minimal cloud envelope") produced mean seasonal AOD values for 2010-11 of 0.08 at Eureka and 0.04 at Ny Ålesund; in 2011-12 these values were 0.12 and 0.09, respectively. Conversely, CALIOP AODs (0 to 8 km) for the high Arctic showed a slight decrease from 2010-11 to 2011-12 (0.04 vs. 0.03).
9

Jonctions Josephson en rampe entre un cuprate dopé aux électrons et un supraconducteur conventionnel / Ramp-edge Josephson junctions between an electron-doped cuprate and a conventional superconductor

Gaudet, Jonathan January 2014 (has links)
Experiments that probe the symmetry of the superconducting gap through a measurement of its phase are among the most direct techniques for observing the d-wave symmetry of hole-doped cuprates. Unfortunately, very few experiments of this type have succeeded in probing the gap symmetry of electron-doped cuprates. Indeed, phase-sensitive experiments generally require Josephson junctions between a cuprate and a conventional superconductor (for example, SQUIDs and corner Josephson junctions). However, it is extremely difficult to obtain such Josephson junctions with electron-doped cuprates, because the growth of these materials is extremely difficult and their physical properties are very sensitive to the fabrication steps required to produce a junction. Recent work by our group on phase purification in thin films of Pr[subscript 2-x]Ce[subscript x]CuO[subscript 4], an electron-doped cuprate, and on the production of high-quality Josephson junctions between two superconducting Pr[subscript 2-x]Ce[subscript x]CuO[subscript 4] electrodes, has revived interest in fabricating a high-quality Josephson junction between Pr[subscript 2-x]Ce[subscript x]CuO[subscript 4] and a conventional superconductor. In this thesis, we propose a fabrication method for ramp-edge Josephson junctions between an electron-doped cuprate (Pr[subscript 1.85]Ce[subscript 0.15]CuO[subscript 4]) and a conventional superconductor (PbIn). This method allowed us to fabricate Josephson junctions with a critical current density of 44 A/cm[superscript 2] and an I[subscript c]R[subscript n] product of 40 µV. We also observe, as expected from theory, oscillations of the critical current of these junctions as a function of the magnetic field applied perpendicular to them. These characteristics allow us to conclude that we have produced the best Josephson junctions of this type (Re[subscript 2-x]Ce[subscript x]CuO[subscript 4] / Au / metallic superconductor) reported in the literature. Based on these results, it is now possible to attempt the experiment probing the superconducting gap symmetry in Pr[subscript 1.85]Ce[subscript 0.15]CuO[subscript 4] using a corner Josephson junction.
10

Implementation och jämförelse av ordnade associativa arrayer / Implementation and comparison of ordered associative arrays

Björklund, Rebecka January 2011 (has links)
The purpose of this thesis is to examine whether structures such as van Emde Boas trees and y-fast tries are faster than a standard structure such as a binary trie at performing IP lookups in routing tables when forwarding packets in a network. This is one of the most frequently performed operations today: it is executed every time a packet passes through a router, and it consists of finding the most suitable path for the packet to reach its destination. It is in this operation that a future problem may arise, owing to the ever-increasing traffic across networks. Reducing the IP-lookup time with structures such as van Emde Boas trees or y-fast tries could be a partial solution to prevent the router from becoming a future bottleneck. The results from the Java implementations show, however, that neither van Emde Boas trees nor y-fast tries produce better results than a binary trie, even though lookups in these structures have lower asymptotic time complexity than lookups in a binary trie. There are several reasons for this: one is that the routing tables used are not large enough for the advantages of the van Emde Boas or y-fast structures to show; another is that they perform more memory accesses than the binary trie. A long-debated question is whether the data throughput of a router can exceed one gigabyte per second (GB/s) merely by changing the router's software and running it on commodity hardware. This thesis, together with several other works, shows that throughput can be increased with a suitable implementation of the routing tables and IP lookup. Although the van Emde Boas tree and the y-fast trie do not outperform the binary trie in lookups per second, both the van Emde Boas tree and the binary trie show that data throughput on the order of GB/s is achievable in software.
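The binary-trie baseline used in the comparison can be sketched as follows (an illustrative Python outline, not the thesis's Java code): longest-prefix match walks the destination address bit by bit, remembering the last next-hop seen.

```python
class BinaryTrie:
    """Baseline binary trie for IP lookup: one node per prefix bit,
    longest-prefix match by walking the destination address."""

    def __init__(self):
        self.root = {}

    def insert(self, prefix_bits, next_hop):
        # prefix_bits is a string like "101" (most significant bit first)
        node = self.root
        for b in prefix_bits:
            node = node.setdefault(b, {})
        node["hop"] = next_hop

    def lookup(self, addr_bits):
        node = self.root
        best = node.get("hop")  # default route, if any
        for b in addr_bits:
            if b not in node:
                break
            node = node[b]
            if "hop" in node:
                best = node["hop"]  # longer match found
        return best
```

Each lookup costs one dictionary access per address bit, which is the chain of memory accesses the thesis identifies as the deciding factor against the asymptotically faster van Emde Boas and y-fast structures.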
