About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
81

On the Existence of a Second Hamilton Cycle in Hamiltonian Graphs With Symmetry

Wagner, Andrew January 2013 (has links)
In 1975, Sheehan conjectured that every simple 4-regular hamiltonian graph has a second Hamilton cycle. If Sheehan's Conjecture holds, then the result extends to all simple d-regular hamiltonian graphs with d at least 3. First, we survey previous results that verify the existence of a second Hamilton cycle when d is large enough. We then demonstrate some techniques for finding a second Hamilton cycle that are used throughout this thesis. Finally, we use these techniques to show that a second Hamilton cycle exists in certain 4-regular hamiltonian graphs whose automorphism group is sufficiently large.
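As a point of reference for what the conjecture asserts (this is a hedged brute-force sketch, not a technique from the thesis), the following Python enumerates all Hamilton cycles of a small graph. The complete graph K5 is simple, 4-regular, and hamiltonian, so Sheehan's Conjecture predicts, and brute force confirms, more than one Hamilton cycle:

```python
from itertools import permutations

def hamilton_cycles(adj):
    # Brute-force enumeration of Hamilton cycles in a small undirected graph.
    # adj maps each vertex to an iterable of its neighbours.
    nodes = sorted(adj)
    start, rest = nodes[0], nodes[1:]
    found = set()
    for perm in permutations(rest):
        cycle = (start,) + perm
        n = len(cycle)
        if all(cycle[(i + 1) % n] in adj[cycle[i]] for i in range(n)):
            # Canonical form: fixed start vertex, smaller of the two
            # directions, so each undirected cycle is counted exactly once.
            found.add(min(cycle, (start,) + perm[::-1]))
    return found

# K5: every vertex adjacent to every other vertex.
K5 = {v: [u for u in range(5) if u != v] for v in range(5)}
cycles = hamilton_cycles(K5)
print(len(cycles))  # 12 = (5-1)!/2 distinct Hamilton cycles
```

Brute force is only feasible for tiny graphs; the point of results like Sheehan's Conjecture is to guarantee a second cycle without such a search.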
82

Proton Nuclear Magnetic Resonance in Mica

Townsend, Don H. 05 1900 (has links)
The experiments described here were undertaken to determine, if possible, by NMR techniques whether the hydroxyl protons in mica are bound in a regular crystalline array and, if so, whether they occur in reasonably isolated pairs, as in waters of hydration.
83

Flexible finite automata-based algorithms for detecting microsatellites in DNA

De Ridder, Corne 17 August 2010 (has links)
Apart from contributing to Computer Science, this research also contributes to Bioinformatics, a subset of the discipline of Computational Biology. The main focus of this dissertation is the development of data-analytical and theoretical algorithms that contribute to the analysis of DNA, and in particular to the detection of microsatellites. Microsatellites, in the context of this dissertation, are consecutively repeated patterns in genomic sequences. A perfect tandem repeat is defined as a string of nucleotides that is repeated consecutively at least twice in a sequence. An approximate tandem repeat is a string of nucleotides repeated consecutively at least twice, with small differences between the instances. The research presented in this dissertation was inspired by molecular biologists, who were found to be visually scanning genetic sequences in search of short approximate tandem repeats, or so-called microsatellites. The aim of this dissertation is to present three algorithms that search for short approximate tandem repeats. The algorithms are implemented in terms of finite automata. The hypothesis posed is thus as follows: finite automata can detect microsatellites effectively in DNA. "Effectively" includes the ability to fine-tune the detection process so that redundant data is avoided and relevant data is not missed during the search. In order to verify whether the hypothesis holds, three related theoretical algorithms, generically referred to as the FireµSat algorithms, have been proposed based on theorems from finite automaton theory. These algorithms have been implemented, and the performance of FireµSat2 has been investigated and compared to other software packages. From the results obtained, it is clear that the performance of these algorithms differs in terms of attributes such as speed, memory consumption, and extensibility. In terms of speed, FireµSat outperformed rival software packages.
The FireµSat algorithms have several parameters that can be used to tune their search. These parameters were devised in consultation with the intended user community in order to enhance the usability of the software. It was found that the parameters of FireµSat can be set to detect more tandem repeats than rival software packages, but also tuned to limit the number of detected tandem repeats. Copyright / Dissertation (MSc)--University of Pretoria, 2010. / Computer Science / unrestricted
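To make the notion of a perfect tandem repeat concrete, here is a minimal Python sketch of a naive scan (an illustration only, not the dissertation's finite-automaton detectors, which also handle approximate repeats and tuning parameters):

```python
def perfect_tandem_repeats(seq, motif_len, min_copies=2):
    # Naive scan for perfect tandem repeats: a motif of length motif_len
    # repeated consecutively at least min_copies times.
    hits = []
    i = 0
    while i + motif_len * min_copies <= len(seq):
        motif = seq[i:i + motif_len]
        copies = 1
        # Count how many times the motif repeats back-to-back from i.
        while seq[i + copies * motif_len:i + (copies + 1) * motif_len] == motif:
            copies += 1
        if copies >= min_copies:
            hits.append((i, motif, copies))
            i += copies * motif_len  # skip past the detected repeat region
        else:
            i += 1
    return hits

print(perfect_tandem_repeats("ACGTGTGTGTAAC", 2))  # [(2, 'GT', 4)]
```

This quadratic-in-the-worst-case scan illustrates why automaton-based approaches, which can be fine-tuned and run in a single pass, are attractive for genome-scale input.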
84

Process-based decomposition and multicore performance : case studies from Stringology

Strauss, Marthinus David January 2017 (has links)
Current computing hardware supports parallelism at various levels. Conventional programming techniques, however, do not utilise this growing resource efficiently. This thesis seeks a better fit between software and current hardware while following a hardware-agnostic software development approach, which allows the programmer to remain focussed on the problem domain. The thesis proposes process-based problem decomposition as a natural way to structure a concurrent implementation that may also improve multicore utilisation and, consequently, run-time performance. The thesis presents four algorithms as case studies from the domain of string pattern matching and finite automata. Each case study is conducted in the same manner. The particular sequential algorithm is decomposed into a number of communicating concurrent processes. This decomposition is described in the process algebra CSP. Hoare's CSP was chosen as one of the best-known process algebras, for its expressive power, conciseness, and overall simplicity. Once the CSP-based process description has brought ideas to a certain level of maturity, the description is translated into a process-based implementation. The Go programming language was used for the implementation, as its concurrency features were inspired by CSP. The performance of the process-based implementation is then compared against its conventional sequential version (also provided in Go). The goal is not to achieve maximal performance, but to compare the run-time performance of an "ordinary" programming effort focussed on a process-based solution against a conventional sequential implementation. Although some implementations did not perform as well as others, some significantly outperformed their sequential counterparts. The thesis thus provides prima facie evidence that a process-based decomposition approach is promising for achieving a better fit between software and current multicore hardware.
/ Thesis (PhD)--University of Pretoria, 2017. / Computer Science / PhD / Unrestricted
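The flavour of process-based decomposition can be sketched in a few lines. The thesis uses Go and CSP; the following Python stand-in (an illustration under that caveat, using threads and queues as the "channels") decomposes a string search into one feeder and several matcher processes communicating only via queues:

```python
import queue
import threading

def find_all(pattern, chunk, offset):
    # Sequential core: all start positions of pattern in chunk.
    hits, i = [], chunk.find(pattern)
    while i != -1:
        hits.append(offset + i)
        i = chunk.find(pattern, i + 1)
    return hits

def parallel_search(text, pattern, workers=2):
    # CSP-flavoured decomposition: matcher processes receive (offset, chunk)
    # messages on one channel and emit hit lists on another.
    tasks, results = queue.Queue(), queue.Queue()
    step = max(1, len(text) // workers)
    overlap = len(pattern) - 1  # chunks overlap so boundary matches survive

    def matcher():
        while True:
            item = tasks.get()
            if item is None:  # poison pill: channel closed
                break
            offset, chunk = item
            results.put(find_all(pattern, chunk, offset))

    threads = [threading.Thread(target=matcher) for _ in range(workers)]
    for t in threads:
        t.start()
    for start in range(0, len(text), step):
        tasks.put((start, text[start:start + step + overlap]))
    for _ in threads:
        tasks.put(None)
    for t in threads:
        t.join()
    hits = []
    while not results.empty():
        hits.extend(results.get())
    return sorted(set(hits))  # overlap may report a boundary hit twice

print(parallel_search("abcabcabca", "abc"))  # [0, 3, 6]
```

Note that CPython threads share one interpreter lock, so unlike the Go version this sketch shows only the structure of the decomposition, not a genuine multicore speed-up.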
85

Finite state automaton construction through regular expression hashing

Coetser, Rayner Johannes Lodewikus 25 August 2010 (has links)
In this study, the regular expressions forming abstract states in Brzozowski's algorithm are not remapped to sequential state transition table addresses as would be the case in the classical approach, but are hashed to integers. Two regular expressions that are hashed to the same hash code are assigned the same integer address in the state transition table, reducing the number of states in the automaton. This reduction does not necessarily lead to the construction of a minimal automaton: no restrictions are placed on the hash function hashing two regular expressions to the same code. Depending on the quality of the hash function, either a super-automaton (previously referred to as an approximate automaton) or an exact automaton is constructed. When two regular expressions are hashed to the same state and they do not represent the same regular language, a super-automaton is constructed. A super-automaton accepts the regular language of the input regular expression, in addition to some extra strings. If the hash function is poor, many regular expressions that do not represent the same regular language will be hashed together, resulting in a smaller automaton that accepts extra strings. In the ideal case, two regular expressions are hashed together only when they represent the same regular language; in this case, an exact minimal automaton is constructed. It is shown that, using the hashing approach, an exact or super-automaton is always constructed. Another outcome of the hashing approach is that a non-deterministic automaton may be constructed; a new version of the hashing variant of Brzozowski's algorithm is put forward which constructs a deterministic automaton. A method is also put forward for measuring the difference between an exact and a super-automaton: the k-equivalence measure, which counts the number of characters up to which the strings of two regular expressions are equal.
The better the hash function, the higher the value of k, up to the point where the hash function hashes two regular expressions together if and only if they have the same regular language. Using the k-equivalence measure, eight generated hash functions and one hand-coded hash function are evaluated on a large number of short regular expressions, generated using Gödel numbers. The k-equivalence concept is extended to the average k-equivalence value in order to evaluate the hash functions on longer regular expressions. The hand-coded hash function is found to produce good results. Copyright / Dissertation (MEng)--University of Pretoria, 2009. / Computer Science / unrestricted
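A minimal Python sketch of the underlying construction may help (an illustration, not the dissertation's implementation): Brzozowski's algorithm builds automaton states from regular-expression derivatives, and here the states are keyed by a pluggable hash_fn. The identity hash shown merges only structurally identical derivatives; a coarser hash_fn would merge more states, exactly the trade-off the abstract describes.

```python
EMPTY, EPS = ('empty',), ('eps',)  # regexes as nested tuples

def nullable(r):
    t = r[0]
    if t == 'eps': return True
    if t in ('empty', 'chr'): return False
    if t == 'cat': return nullable(r[1]) and nullable(r[2])
    if t == 'alt': return nullable(r[1]) or nullable(r[2])
    return True  # star

def deriv(r, c):
    # Brzozowski derivative of r with respect to character c.
    t = r[0]
    if t in ('empty', 'eps'): return EMPTY
    if t == 'chr': return EPS if r[1] == c else EMPTY
    if t == 'cat':
        d = ('cat', deriv(r[1], c), r[2])
        return ('alt', d, deriv(r[2], c)) if nullable(r[1]) else d
    if t == 'alt': return ('alt', deriv(r[1], c), deriv(r[2], c))
    return ('cat', deriv(r[1], c), r)  # star: (d_c r) . r*

def simplify(r):
    # Light normalization so structurally equal derivatives coincide.
    t = r[0]
    if t == 'cat':
        a, b = simplify(r[1]), simplify(r[2])
        if EMPTY in (a, b): return EMPTY
        if a == EPS: return b
        if b == EPS: return a
        return ('cat', a, b)
    if t == 'alt':
        a, b = simplify(r[1]), simplify(r[2])
        if a == EMPTY: return b
        if b == EMPTY or a == b: return a
        return ('alt', a, b)
    if t == 'star': return ('star', simplify(r[1]))
    return r

def build_dfa(r, alphabet, hash_fn=lambda r: r, limit=64):
    # States are hash codes of derivatives.  The identity hash_fn merges
    # only identical terms (exact automaton here); a coarser hash_fn merges
    # more states and may yield a super-automaton.
    r = simplify(r)
    start = hash_fn(r)
    rep, trans, accept = {start: r}, {}, set()
    todo = [start]
    while todo and len(rep) < limit:
        h = todo.pop()
        if nullable(rep[h]):
            accept.add(h)
        for c in alphabet:
            d = simplify(deriv(rep[h], c))
            hd = hash_fn(d)
            if hd not in rep:
                rep[hd] = d
                todo.append(hd)
            trans[h, c] = hd
    return start, trans, accept

def run(dfa, s):
    start, trans, accept = dfa
    q = start
    for c in s:
        q = trans[q, c]
    return q in accept

A, B = ('chr', 'a'), ('chr', 'b')
r = ('cat', ('star', ('alt', A, B)), B)  # (a|b)*b : strings ending in b
dfa = build_dfa(r, "ab")
print(run(dfa, "aab"), run(dfa, "aba"))  # True False
```

For (a|b)*b this yields the two-state minimal automaton; without the simplification step the derivative terms would keep growing and the construction might not terminate, which is part of why hashing derivatives to a bounded address space is attractive.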
86

Modernizing the Syntax of Regular Expressions

Andersson, Adam, Hansson, Ludwig January 2020 (has links)
Context. Writing and working with regular expressions can be a slow and tedious task, mainly because of their syntax, but also because several different dialects exist, which can easily cause confusion. Even though regular expressions have been widely used for parsing and programming language design, they are now frequently used for input validation and appear in common applications such as text editors. Objectives. The main objective of our thesis is to determine whether a regular expression language that is more like the surrounding programming language would increase usability, readability, and maintainability. We then investigate what impact this would have on, e.g., development speed, and what benefits and liabilities a more modernized syntax could introduce. Methods. Two different methods were used to answer our research questions: exploratory interviews and experiments. The data from the experiments were collected by screen recording and by a program in the environment we provided to the participants. Results. Interviews with developers who use traditional regular expressions on a regular basis confirm that the syntax is confusing even for experienced developers. Our results from the experiment indicate that a language more like the surrounding language increases both overall ease of use and development speed. Conclusions. From this research, we conclude that a regular expression language that is more like the surrounding programming language does increase usability, readability, and maintainability. We could clearly see that it had a positive effect on development speed as well. Keywords: regular expressions, programming language design, readability
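A small taste of the readability problem, and of one existing mitigation, can be shown with standard Python (this illustrates the general issue; it is not the syntax the thesis proposes). The re.VERBOSE flag lets the same pattern be laid out and annotated:

```python
import re

# Dense one-liner: an (approximate) ISO date such as "2020-01-31".
dense = re.compile(r"(\d{4})-(0[1-9]|1[0-2])-(0[1-9]|[12]\d|3[01])")

# Same language with re.VERBOSE: unescaped whitespace and # comments are
# ignored, so the structure can be spelled out.
readable = re.compile(r"""
    (\d{4})                     # year
    -
    (0[1-9] | 1[0-2])           # month 01-12
    -
    (0[1-9] | [12]\d | 3[01])   # day 01-31
""", re.VERBOSE)

for s in ["2020-01-31", "2020-13-01"]:
    assert bool(dense.fullmatch(s)) == bool(readable.fullmatch(s))
print(readable.fullmatch("2020-01-31").groups())  # ('2020', '01', '31')
```

Even the verbose form keeps the traditional operator symbols, which is precisely the layer the thesis's "modernized" syntax aims to replace with constructs closer to the host language.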
87

Concessions for regular public passenger transport by road and their territorial dependence: evolution, socioeconomic context, and the model developed in the province of Alicante

Ríos Pérez, Manuel 05 November 2020 (has links)
The nineteenth and twentieth centuries witnessed extraordinary demographic and economic change as the Industrial Revolution took hold. They were also the setting in which modern, motorized transport developed, based first on steam and later on petroleum derivatives and electricity. The railway was the first revolutionary mode to change how people moved over land, although at the beginning of the twentieth century the automobile appeared unstoppably and has held an undisputed leading role to the present day. The evolution of these modes of transport and their relationship with the territory of the province of Alicante form the core of our research, which we have limited to the development of the system of regular interurban public passenger transport by road. Transport consists of moving people and goods from one place to another, which requires adequate infrastructure for the carrying vehicles to circulate. In passenger transport especially, the demographic and orographic conditions of the territory, which are distinctive in the province of Alicante, play a decisive role. Our objective has been to study the relationships between the provincial territory and the system of regular interurban public passenger transport by road, and how this system has been able to create the networks, at both the district (comarcal) and provincial levels, that serve a significant part of the population's demand for mobility. In this process, we have examined the interrelations of the territory with infrastructure and demography, as well as between the Administration and its concession holders, which, under the oversight of specific legislation, give the sector its singular characteristics.
The resulting conclusions present a public service sector that has developed over more than a century under the administrative concession regime, serving practically every municipality in the province without mandatory costs to the public purse. However, regulatory changes arising from the application of European legislation, together with the recent COVID-19 crisis, create a period of uncertainty about the sector's future, which opens the door to new lines of research.
88

Effects of Student Self-Management on Generalization of Student Performance to Regular Classes

Peterson, Lloyd Douglas 01 May 1999 (has links)
The use of a student self-monitoring and self-rating/teacher matching strategy to assist generalization of social skills use and decrease off-task behavior of five inner-city at-risk middle school students was investigated. A multiple-baseline design was used to assess the effects of the intervention in up to six different class settings. Results indicated that the self-monitoring and self-rating/teacher matching intervention led to an increase in correct social skills use and a decrease in off-task behaviors with all five students. These data add to the existing literature, suggesting self-monitoring with self-rating/teacher matching is an effective procedure to promote generalization of behavior. Implications for research and practice are discussed.
89

Proximal average and regularization

Carranza Purca, Marlo January 2015 (has links)
In many real situations the aim is to use limited resources in the best possible way, that is, so that their use yields the greatest benefit. Linear programming studies the optimization of a linear function subject to a set of linear equality or inequality constraints. It is a mathematical model first formulated by George B. Dantzig in 1947, when he was a mathematical adviser to the United States Air Force; we also know that Leonid V. Kantorovich had already posed and solved problems of this type in 1939. In applications of optimization to economics, control theory, inverse problems, and so on, problems arise in which the objective function is not always differentiable, or in which the problem is not well posed. To solve such problems, techniques from convex analysis are used, such as regularization methods for convex functions and the proximal point and augmented Lagrangian methods, among others. Recently, in 2009, Bauschke, Lucet, and Trienis proposed the proximal average, a novel technique that is self-dual with respect to the Fenchel conjugate and can work even with functions whose domains are disjoint. We show that this technique can be exploited to manipulate the Goebel envelope and prove its self-duality with respect to the Fenchel conjugate, and to treat the optimization of several objective functions in the convex case, or even for certain functions that are not necessarily convex, even when the domains of these functions are disjoint. / Tesis
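The building block beneath the proximal average is the proximal operator and its Moreau envelope. As a hedged illustration of the general idea only (the thesis's contribution, the proximal average of several functions, is not reproduced here), the proximal operator of f(x) = |x| has the well-known closed form of soft-thresholding, and its Moreau envelope is the smooth Huber function:

```python
def prox_abs(x, t):
    # Proximal operator of f(x) = |x| with parameter t > 0:
    #   argmin_u ( |u| + (1/(2t)) * (u - x)**2 )  =  soft-thresholding.
    if x > t:
        return x - t
    if x < -t:
        return x + t
    return 0.0

def moreau_env_abs(x, t):
    # Moreau envelope of |x|: evaluate the prox objective at its minimizer.
    # This is the Huber function, a smooth regularization of |x|.
    u = prox_abs(x, t)
    return abs(u) + (u - x) ** 2 / (2 * t)

print(prox_abs(3.0, 1.0), prox_abs(0.5, 1.0))  # 2.0 0.0
print(moreau_env_abs(3.0, 1.0))  # 2.5, i.e. |x| - t/2 in the linear region
```

Regularizations of this kind are exactly what the abstract refers to: they replace a nonsmooth objective by a smooth one with the same minimizers, which the proximal average then extends to averaging several convex functions.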
90

Combinatorial Problems Related to the Representation Theory of the Symmetric Group

Kreighbaum, Kevin M. 19 May 2010 (has links)
No description available.
