201

Systema

Merkel, Evan Andrew 27 February 2018
This thesis is a three-part creative coding exploration of generative typography and pixel-based image manipulation. Systema comprises three distinct projects, Lyra, Mensa, and Vela, that investigate and demonstrate the advantages and drawbacks of generative graphic design. / Master of Fine Arts
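To make the medium concrete, here is a minimal Python sketch of one elementary pixel-based typographic manipulation of the kind the abstract describes. It is an editorial illustration, not code from Systema; the word, canvas size, and sine displacement are arbitrary choices.

```python
from PIL import Image, ImageDraw, ImageFont
import math

# Render a word to a bitmap, then displace each pixel row horizontally
# with a sine function -- one elementary generative-typography effect.
W, H = 400, 120
img = Image.new("L", (W, H), 255)
draw = ImageDraw.Draw(img)
draw.text((20, 30), "SYSTEMA", fill=0, font=ImageFont.load_default())

out = Image.new("L", (W, H), 255)
src, dst = img.load(), out.load()
for y in range(H):
    shift = int(8 * math.sin(y / 6.0))  # per-row horizontal displacement
    for x in range(W):
        dst[(x + shift) % W, y] = src[x, y]

out.save("systema_sketch.png")
```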
202

Studying 3D Spherical Shell Convection using ASPECT

Euen, Grant Thomas 08 January 2018
ASPECT is a new convection code that uses more modern and advanced solver methods than legacy geodynamics codes. I use ASPECT to calculate 2-dimensional Cartesian as well as 2- and 3-dimensional spherical-shell convection cases. All cases use the Boussinesq approximation. The 2D cases come from Blankenbach et al. (1989), van Keken et al. (1997), and Davies et al. (in preparation). Results for the 2D cases agree well with their respective benchmark papers: the time evolutions of the root mean square velocity (Vrms) and the Nusselt number agree, often to within 1%. The 3D cases come from Zhong et al. (2008). Modifications were made to the simple.cc and harmonic_perturbation.cc files in the ASPECT code in order to reproduce the initial conditions and the temperature dependence of the rheology used in the benchmark. Cases are compared between CitcomS and ASPECT at different levels of grid spacing, and between uniform grid spacing and the ASPECT default grid spacing, which refines toward the center. Results for Vrms, average temperature, and the Nusselt numbers at the top and bottom of the shell range from better than 1% agreement between CitcomS and ASPECT for cases with tetragonal planforms and a Rayleigh number of 7000, to as much as a 44% difference for cases with cubic planforms and a Rayleigh number of 10^5. For all benchmarks, the top Nusselt number from ASPECT is farthest from the reported benchmark values. The 3D planform and radially averaged quantity plots agree. I present these results, as well as recommendations and possible fixes for the discrepancies in the results, specifically in the Nusselt numbers, Vrms, and average temperature. / Master of Science / Mantle convection is the primary process by which heat is transferred from the interior of Earth to its exterior. It involves the physical movement of material in the mantle: hot material rises toward the surface and cools, while cold material sinks to the base and warms. This transfer of heat and energy is also the driving force behind plate tectonics, the process by which the surface of the Earth moves and changes with time. Plate tectonics is responsible for the formation of oceans, mountains, volcanoes, and trenches, to name a few. Understanding the behavior of the mantle as it convects is crucial to understanding how the Earth, and planetary bodies like it, develop over time. In this work, I use the new modeling code ASPECT (Advanced Solver for Problems in Earth's ConvecTion) to test various models in 2 and 3 dimensions. This is done to compare the results calculated by ASPECT with those of older, legacy codes, both for benchmarking and for the growth of ASPECT. Insight is also gleaned into the large-scale factors that influence mantle convection and planetary development. My results show good agreement between results calculated by ASPECT and those of legacy codes, though some values show discrepancies. The main values I present here are Vrms (the root mean square velocity), the average temperature, and the Nusselt number calculated for both the top and base of the models. I present these results and potential solutions to the discrepancies encountered.
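For readers outside geodynamics, the diagnostics compared in this benchmark have standard definitions in the Boussinesq convection literature. These are the textbook forms, not equations quoted from the thesis:

```latex
\[
  \mathrm{Ra} \;=\; \frac{\rho_0\, g\, \alpha\, \Delta T\, d^{3}}{\kappa\, \eta},
  \qquad
  V_{\mathrm{rms}} \;=\; \sqrt{\frac{1}{V}\int_{V}\mathbf{u}\cdot\mathbf{u}\,\mathrm{d}V},
  \qquad
  \mathrm{Nu} \;=\; \frac{q_{\text{convective}}}{q_{\text{conductive}}},
\]
```

where \(\rho_0\) is the reference density, \(g\) gravity, \(\alpha\) thermal expansivity, \(\Delta T\) the temperature drop across a layer of thickness \(d\), \(\kappa\) thermal diffusivity, \(\eta\) viscosity, and \(\mathbf{u}\) the velocity field. Nu measures total surface heat flow relative to the purely conductive value.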
203

Semi-automatic code-to-code transformer for Java: Transformation of library calls / Halvautomatisk kodöversättare för Java: Transformation av biblioteksanrop

Boije, Niklas, Borg, Kristoffer January 2016
Having the ability to perform large automatic software changes in a code base opens new possibilities for software restructuring and cost savings. The possibility of replacing software libraries in a semi-automatic way has been studied. String metrics are used to find equivalents between two libraries by comparing class and method names. Rules based on these equivalents then describe how to apply the transformation to the code base. Using the abstract syntax tree, locations for replacements are found and the transformations are performed. After the transformations have been performed, the effort saved by doing the replacement automatically rather than manually is evaluated, showing that a large part of the cost can be saved. An additional evaluation estimates the maintenance cost saved annually by changing libraries, to support the claim that such an exchange can reduce the annual cost of the project.
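As a rough illustration of the matching step, the following Python sketch pairs methods of a hypothetical old library with their closest counterparts in a new one using a string similarity ratio. The API names and the 0.4 threshold are invented for the example; the thesis itself targets Java code bases and applies the resulting rules via the abstract syntax tree.

```python
import difflib

# Hypothetical method inventories for the old and new library
# (illustrative names, not taken from the thesis).
old_api = ["HttpClient.openConnection", "HttpClient.readBody", "HttpClient.close"]
new_api = ["WebClient.connect", "WebClient.fetchBody", "WebClient.shutdown"]

def best_match(name, candidates):
    """Pick the candidate whose name is most similar, with its score."""
    scored = [(difflib.SequenceMatcher(None, name, c).ratio(), c)
              for c in candidates]
    return max(scored)

# Build a rule table: old call -> proposed new call, kept only when the
# string metric is confident enough for semi-automatic review.
rules = {}
for method in old_api:
    score, match = best_match(method, new_api)
    if score > 0.4:  # threshold would be tuned per library pair
        rules[method] = match

for old, new in rules.items():
    print(f"replace {old}() with {new}()")
```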
204

Exploiting abstract syntax trees to locate software defects

Shippey, Thomas Joshua January 2015
Context. Software defect prediction aims to reduce the large costs involved with faults in a software system. A wide range of traditional software metrics have been evaluated as potential defect indicators. These traditional metrics are derived from the source code or from the software development process. Studies have shown that no metric clearly outperforms another, and identifying defect-prone code using traditional metrics has reached a performance ceiling. Less traditional metrics have been studied, derived from the natural language of the source code. These newer, finer-grained metrics have shown promise within defect prediction. Aims. The aim of this dissertation is to study the relationship between short Java constructs and the faultiness of source code. To study this relationship, this dissertation introduces the concepts of a Java sequence and a Java code snippet. Sequences are created using the Java abstract syntax tree: the ordering of the nodes within the abstract syntax tree creates the sequences, while small subsequences of each sequence are the code snippets. The dissertation tries to find a relationship between the code snippets and faulty and non-faulty code. It also looks at the evolution of the code snippets as a system matures, to discover whether the code snippets significantly associated with faulty code change over time. Methods. To achieve the aims of the dissertation, two main techniques have been developed: finding defective code, and extracting Java sequences and code snippets. Finding defective code has been split into two areas: finding the defect fix points and the defect insertion points. To find the defect fix points, an implementation of the bug-linking algorithm, called S+e, has been developed. Two algorithms were developed to extract the sequences and the code snippets. The code snippets are analysed using the binomial test to find which ones are significantly associated with faulty and non-faulty code. These techniques have been applied to five Java datasets: ArgoUML, AspectJ, and three releases of Eclipse.JDT.core. Results. There are significant associations between some code snippets and faulty code. Frequently occurring fault-prone code snippets include those associated with identifiers, method calls, and variables. Some code snippets significantly associated with faults are always in faulty code. There are 201 code snippets significantly associated with faults across all five systems. The technique is unable to find any significant associations between code snippets and non-faulty code. The relationship between code snippets and faults seems to change as the system evolves, with more snippets becoming fault-prone as Eclipse.JDT.core evolved over the three releases analysed. Conclusions. This dissertation has introduced the concept of code snippets into software engineering and defect prediction. The use of code snippets offers a promising approach to identifying potentially defective code. Unlike previous approaches, code snippets are based on a comprehensive analysis of low-level code features and potentially allow the full set of code defects to be identified. Initial research into the relationship between code snippets and faults has shown that some code constructs or features are significantly related to software faults.
The significant associations between code snippets and faults have provided additional empirical evidence for some already-researched bad constructs within defect prediction. The code snippets have shown that some constructs significantly associated with faults are located in all five systems; although this set is small, finding any defect indicators that transfer successfully from one system to another is rare.
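The statistical step here is a plain one-sided binomial test: a snippet is flagged as fault-prone when it occurs in faulty code significantly more often than the system's base fault rate would predict. A hedged Python sketch with invented counts, using scipy's binomtest rather than the dissertation's own tooling:

```python
from scipy.stats import binomtest

# Hypothetical counts for one code snippet (not taken from the thesis):
# how often it occurs in faulty code versus all its occurrences, tested
# against the base rate of faulty code in the system.
snippet_in_faulty = 40      # occurrences of the snippet in faulty code
snippet_total = 60          # all occurrences of the snippet
base_fault_rate = 0.25      # fraction of code that is faulty overall

result = binomtest(snippet_in_faulty, snippet_total,
                   base_fault_rate, alternative="greater")
if result.pvalue < 0.05:
    print(f"snippet is significantly fault-prone (p = {result.pvalue:.2e})")
else:
    print("no significant association with faulty code")
```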
205

L'influence du modèle français sur les codifications congolaises : cas du droit des personnes et de la famille / The influence of the French model on Congolese codifications: the case of the law of persons and the family

Bokolombe, Bokina 14 December 2013
The French Civil Code has exerted considerable influence on Congolese civil codification. In 1895, through colonization, the Belgians imported into the Congo the Napoleonic Code, which they had themselves inherited from the conquests of the French Emperor. The Congolese legal system, once based on unwritten customary law made up of many local customs and mores, was thus given a rationalized code modeled on the French example. After independence, the Congolese political authorities wanted to replace the colonial code, which was not only full of gaps but, above all, ill-suited to the Congolese mentality and traditions. The legislative work undertaken, notably on the part relating to the law of persons and the family, called for recourse to Congolese authenticity… In 1987, the Congolese legislature enacted the law establishing the Family Code. Did this Code, which claimed to break with the old colonial code, not ultimately align itself with that same contested model? What choice did the Congolese legislature make between tradition and modernity? What are the main innovations of this Code? What criticisms have been made of it? Today, twenty years after its drafting, does the aging of the Family Code not call for recodification?
206

IVCon: A GUI-based Tool for Visualizing and Modularizing Crosscutting Concerns

Saigal, Nalin 10 April 2009
Code modularization provides benefits throughout the software life cycle; however, the presence of crosscutting concerns (CCCs) in software hinders its complete modularization. This thesis describes IVCon, a GUI-based tool that provides a novel approach to modularization of CCCs. IVCon enables users to create, examine, and modify their code in two different views, the woven view and the unwoven view. The woven view displays program code in colors that indicate which CCCs various code segments implement. The unwoven view displays code in two panels, one showing the core of the program and the other showing all the code implementing each concern in an isolated module. IVCon aims to provide an easy-to-use interface for conveniently creating, examining, and modifying code in, and translating between, the woven and unwoven views.
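A minimal sketch of the data model such a tool implies, assuming each code segment carries a set of concern tags; the segment texts and concern names are invented, and plain-text tags stand in for IVCon's colors:

```python
from dataclasses import dataclass

@dataclass
class Segment:
    text: str
    concerns: tuple  # e.g. ("logging",) or () for core code

program = [
    Segment("int balance = 0;", ()),
    Segment('log.info("withdraw called");', ("logging",)),
    Segment("balance -= amount;", ()),
    Segment('log.info("balance updated");', ("logging",)),
]

def woven(segments):
    """One listing; concern tags stand in for IVCon's colors."""
    return "\n".join(
        f"[{','.join(s.concerns) or 'core'}] {s.text}" for s in segments)

def unwoven(segments):
    """Two panels: core code, and each concern as an isolated module."""
    core = [s.text for s in segments if not s.concerns]
    modules = {}
    for s in segments:
        for c in s.concerns:
            modules.setdefault(c, []).append(s.text)
    return core, modules

print(woven(program))
print(unwoven(program))
```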
207

Code violations and other blight indicators: a study of Colony Park/Lakeside (Austin, Texas)

Durden, Teri Deshun 11 December 2013
Blight and the elimination thereof have profoundly impacted urban areas. In Colony Park/Lakeside (Austin, Texas), community leaders and members of the local neighborhood association have come together to mitigate and reverse social, economic, and physical symptoms of blight in their neighborhood. Following the approval of a HUD Community Challenge Planning Grant application that was submitted by the Austin Neighborhood Housing and Community Development (NHCD) department, these individuals utilized the media attention surrounding the grant to campaign for code enforcement, landlord-tenant accountability, policing, and the clean-up of illegal dumping in the area. Moreover, after much ado between residents and City workers, the neighborhood association devised a community-focused partnership with the City to ensure that current residents would reap the benefits of the planning process and help define the collective will and interests of the community. Utilizing publicly available data and first-hand knowledge from one City code compliance investigator and local residents, this report attempts to provide a blight indicator analysis of the Colony Park/Lakeside planning area as defined by NHCD. In other words, this report uses quantitative data to create descriptive maps of current neighborhood conditions with particular attention to code violations and community discussions surrounding them. The results of this work are intended to shed light on where resources should be directed to further research in the area and to resolve issues that threaten the health, safety, and viability of the neighborhood today. / text
208

Near Shannon Limit and Reduced Peak to Average Power Ratio Channel Coded OFDM

Kwak, Yongjun 24 July 2012
Solutions to the problem of large peak-to-average power ratio (PAPR) in orthogonal frequency division multiplexing (OFDM) systems are proposed. Although the design of PAPR-reduction codewords has been extensively studied and the existence of asymptotically good codes with low PAPR has been proved, no reduced-PAPR capacity-achieving code has yet been constructed. This is the topic of the current thesis. This goal is achieved by implementing a time-frequency turbo block coded OFDM. In this scheme, we design the frequency-domain component code to have a PAPR bounded by a small number. The time-domain component code is designed to obtain good performance while keeping the decoding algorithm at reasonable complexity. Through comparative numerical evaluation we show that our method achieves considerable improvement in terms of PAPR with slight performance degradation compared to capacity-achieving codes of similar block lengths. For the frequency-domain component code, we used the realization of Golay sequences as cosets of the first-order Reed-Muller code and a modification of the dual BCH code. A simple MAP decoding algorithm for the modified dual BCH code is also provided. Finally, we provide a flexible and practical scheme based on a probabilistic approach to the PAPR problem. This approach decreases the PAPR without any significant performance loss and without any adverse impact or required change to the system. / Engineering and Applied Sciences
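To make the PAPR bound concrete, the sketch below measures the envelope PAPR of an oversampled OFDM symbol and builds a Golay complementary pair with the Rudin-Shapiro recursion, one standard realization of such sequences: members of a Golay pair are guaranteed a PAPR of at most 2 (about 3 dB), whereas random BPSK loading typically lands much higher. This is an editorial illustration, not the thesis's Reed-Muller coset construction:

```python
import numpy as np

def papr_db(freq_symbols, oversample=4):
    """PAPR of the OFDM envelope, via an oversampled IFFT."""
    n = len(freq_symbols)
    padded = np.zeros(n * oversample, dtype=complex)
    padded[:n // 2] = freq_symbols[:n // 2]          # zero-pad in the middle
    padded[-(n - n // 2):] = freq_symbols[n // 2:]   # to oversample in time
    x = np.fft.ifft(padded)
    power = np.abs(x) ** 2
    return 10 * np.log10(power.max() / power.mean())

# Rudin-Shapiro recursion: (a, b) -> (a|b, a|-b) yields a Golay
# complementary pair at every length 2^k.
a, b = np.array([1.0]), np.array([1.0])
for _ in range(6):  # length 64
    a, b = np.concatenate([a, b]), np.concatenate([a, -b])

rng = np.random.default_rng(0)
random_bpsk = rng.choice([-1.0, 1.0], size=64)
print(f"Golay carrier PAPR:  {papr_db(a):.2f} dB")   # at most ~3 dB
print(f"random carrier PAPR: {papr_db(random_bpsk):.2f} dB")
```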
209

Code-switching and code-mixing in IsiZulu

Nontolwane, Grace Benedicta Ncane 24 April 2014
M.A. (African Languages) / Please refer to full text to view abstract
210

The analysis of enumerative source codes and their use in Burrows‑Wheeler compression algorithms

McDonald, Andre Martin 10 September 2010
In the late 20th century the reliable and efficient transmission, reception and storage of information proved to be central to the most successful economies all over the world. The Internet, once a classified project accessible to a select few, is now part of the everyday lives of a large part of the human population, and as such the efficient storage of information is an important part of the information economy. The improvement of the information storage density of optical and electronic media has been remarkable, but the elimination of redundancy in stored data and the reliable reconstruction of the original data are still desired goals. The field of source coding is concerned with the compression of redundant data and its reliable decompression. The arithmetic source code, which was independently proposed by J. J. Rissanen and R. Pasco in 1976, revolutionized the field of source coding. Compression algorithms that use an arithmetic code to encode redundant data are typically more effective and computationally more efficient than compression algorithms that use earlier source codes such as extended Huffman codes. The arithmetic source code is also more flexible than earlier source codes, and is frequently used in adaptive compression algorithms. The arithmetic code remains the source code of choice, despite having been introduced more than 30 years ago. The problem of effectively encoding data from sources with known statistics (i.e. where the probability distribution of the source data is known) was solved with the introduction of the arithmetic code. The probability distribution of practical data is seldom available to the source encoder, however. The source coding of data from sources with unknown statistics is a more challenging problem, and remains an active research topic. Enumerative source codes were introduced by T. J. Lynch and L. D. Davisson in the 1960s. These lossless source codes have the remarkable property that they may be used to effectively encode source sequences from certain sources without requiring any prior knowledge of the source statistics. One drawback of these source codes is the computationally complex nature of their implementations. Several years after the introduction of enumerative source codes, J. G. Cleary and I. H. Witten proved that approximate enumerative source codes may be realized by using an arithmetic code. Approximate enumerative source codes are significantly less complex than the original enumerative source codes, but are less effective than the original codes. Researchers have become more interested in arithmetic source codes than enumerative source codes since the publication of the work by Cleary and Witten. This thesis concerns the original enumerative source codes and their use in Burrows–Wheeler compression algorithms. A novel implementation of the original enumerative source code is proposed. This implementation has a significantly lower computational complexity than the direct implementation of the original enumerative source code. Several novel enumerative source codes are introduced in this thesis. These codes include optimal fixed-to-fixed-length source codes with manageable computational complexity. A generalization of the original enumerative source code, which includes more complex data sources, is proposed in this thesis. The generalized source code uses the Burrows–Wheeler transform, which is a low-complexity algorithm for converting the redundancy of sequences from complex data sources to a more accessible form.
The generalized source code effectively encodes the transformed sequences using the original enumerative source code. It is demonstrated and proved mathematically that this source code is universal (i.e. the code has an asymptotic normalized average redundancy of zero bits).
Copyright / Dissertation (MEng)--University of Pretoria, 2010. / Electrical, Electronic and Computer Engineering / unrestricted
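The core of the Lynch-Davisson scheme is a ranking: a length-n binary sequence of weight w is represented by its index among all C(n, w) such sequences, so no source statistics are needed. A minimal Python sketch of the textbook fixed-weight construction (not the thesis's lower-complexity implementation):

```python
from math import comb

def enum_encode(bits):
    """Rank of a binary sequence among all sequences with the same
    length and number of ones, in lexicographic order (0 before 1)."""
    n, index, ones = len(bits), 0, sum(bits)
    for i, b in enumerate(bits):
        if b == 1:
            # Count every same-weight sequence that has a 0 here instead:
            # all ways of placing the remaining ones in the tail.
            index += comb(n - 1 - i, ones)
            ones -= 1
    return index

def enum_decode(index, n, ones):
    """Inverse mapping: rebuild the sequence from (index, length, weight)."""
    bits = []
    for i in range(n):
        c = comb(n - 1 - i, ones)
        if ones and index >= c:
            bits.append(1)
            index -= c
            ones -= 1
        else:
            bits.append(0)
    return bits

seq = [0, 1, 1, 0, 1, 0, 0, 1]
idx = enum_encode(seq)
assert enum_decode(idx, len(seq), sum(seq)) == seq
# The codeword is idx written in ceil(log2(C(n, w))) bits, plus the weight.
```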
