601

Algoritmen som kulturellt redskap : Fyra elevers förståelse av additionsalgoritmen / The algorithm as a cultural tool : Four students' understanding of the addition algorithm

Bartfai, Sara January 2016
The aim of this investigation has been to examine the understanding of the addition algorithm among four students in a second-grade class in Stockholm. A small field study was carried out, including both interviews and classroom studies. Vygotsky's socio-cultural theory, and more specifically the concepts of mediation and cultural tools, has been applied. Vygotsky asserts that our contact with the world is mediated by cultural tools, and the addition algorithm is in this thesis seen as a cultural tool that the students are to appropriate. The results show variation in the students' understanding of the addition algorithm. Most importantly, they show that it is possible for students to "say more than they know" with the use of the algorithm. It is difficult to see how much a student really understands of a mathematical concept, and easier to see whether they do not understand it or are using it in an inappropriate way. It is therefore necessary for teachers to form a dialogue with the students and ask them why they do as they do while using different mathematical concepts, such as addition algorithms, in order to acquire a picture of their mathematical understanding.
602

Algorithmic correspondence and completeness in modal logic

Conradie, Willem Ernst 06 March 2008
This thesis takes an algorithmic perspective on the correspondence between modal and hybrid logics on the one hand, and first-order logic on the other. The canonicity of formulae, and by implication the completeness of logics, is treated simultaneously. Modal formulae define second-order conditions on frames which, in some cases, are equivalently reducible to first-order conditions. Modal formulae for which the latter is possible are called elementary. As is well known, it is algorithmically undecidable whether a given modal formula defines a first-order frame condition or not. Hence, any attempt at delineating the class of elementary modal formulae by means of a decidable criterion can only constitute an approximation of this class. Syntactically specified approximations of this kind include the classes of Sahlqvist and inductive formulae. The approximations we consider take the form of algorithms. We develop an algorithm called SQEMA, which computes first-order frame equivalents for modal formulae by first transforming them into pure formulae in a reversive hybrid language. It is shown that this algorithm subsumes the classes of Sahlqvist and inductive formulae, and that all formulae on which it succeeds are d-persistent (canonical), and hence axiomatize complete normal modal logics. SQEMA is extended to polyadic languages, and it is shown that this extension succeeds on all polyadic inductive formulae. The canonicity result is also transferred. SQEMA is next extended to hybrid languages. Persistence results with respect to discrete general frames are obtained for certain of these extensions. The notion of persistence with respect to strongly descriptive general frames is investigated, and some syntactic sufficient conditions for such persistence are obtained.
SQEMA is adapted to guarantee the persistence with respect to strongly descriptive frames of the hybrid formulae on which it succeeds, and hence the completeness of the hybrid logics axiomatized with these formulae. New syntactic classes of elementary and canonical hybrid formulae are obtained. Semantic extensions of SQEMA are obtained by replacing the syntactic criterion of negative/positive polarity, used to determine the applicability of a certain transformation rule, by its semantic correlate, monotonicity. In order to guarantee the canonicity of the formulae on which the thus extended algorithm succeeds, syntactically correct equivalents for monotone formulae are needed. Different versions of Lyndon's monotonicity theorem, which guarantee the existence of these equivalents, are proved. Constructive versions of these theorems are also obtained by means of techniques based on bisimulation quantifiers. Via the standard second-order translation, the modal elementarity problem can be attacked with any second-order quantifier elimination algorithm. Our treatment of this approach takes the form of a study of the DLS algorithm. We partially characterize the formulae on which DLS succeeds in terms of syntactic criteria. It is shown that DLS succeeds in reducing all Sahlqvist and inductive formulae, and that all modal formulae in a single propositional variable on which it succeeds are canonical.
603

Advances in genetic algorithm optimization of traffic signals

Kesur, Khewal Bhupendra 29 May 2008
Recent advances in the optimization of fixed-time traffic signals have demonstrated a move towards genetic algorithm optimization, with traffic network performance evaluated via stochastic microscopic simulation models. This dissertation examines methods for improved optimization. Several modified versions of the genetic algorithm and alternative genetic operators were evaluated on test networks, and a traffic simulation model was developed for assessment purposes. Application of the CHC search algorithm with real-valued crossover and mutation operators was found to offer improved optimization efficiency over the standard genetic algorithm with binary genetic operators. Computing resources are best utilized by using a single replication of the traffic simulation model with common random numbers for fitness evaluations. Combining these improvements yielded delay reductions of 13% to 32% over the standard approaches. A coding scheme allowing for complete optimization of signal phasing is proposed, and a statistical model for comparing genetic algorithm optimization efficiency on stochastic functions is also introduced. Alternative delay measurements, amendments to genetic operators and modifications to the CHC algorithm are also suggested.
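The genetic-algorithm machinery described above can be illustrated with a minimal sketch. This is a generic real-coded GA (truncation selection, blend crossover, clamped Gaussian mutation) maximizing a toy fitness function; it is not the CHC variant or the simulation-based fitness used in the dissertation, and all parameter values below are illustrative assumptions.

```python
import random

def genetic_algorithm(fitness, n_genes, pop_size=30, generations=50,
                      mutation_rate=0.1, seed=0):
    """Minimal real-coded GA: truncation selection, blend crossover,
    clamped Gaussian mutation. Maximizes `fitness` over [0, 1]^n_genes."""
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(n_genes)] for _ in range(pop_size)]
    for _ in range(generations):
        elite = sorted(pop, key=fitness, reverse=True)[:pop_size // 2]
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]      # blend crossover
            child = [min(1.0, max(0.0, g + rng.gauss(0, 0.1)))
                     if rng.random() < mutation_rate else g
                     for g in child]                          # Gaussian mutation
            children.append(child)
        pop = elite + children                                # elitist replacement
    return max(pop, key=fitness)

# Toy fitness: every gene should approach 0.7.
best = genetic_algorithm(lambda c: -sum((g - 0.7) ** 2 for g in c), n_genes=3)
```

In the dissertation the fitness evaluation is a stochastic traffic simulation, which is why common random numbers matter: comparing two timing plans under identical simulated demand removes much of the sampling noise from the comparison.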
604

Pollard's rho method

Bucic, Ida January 2019
In this work we investigate a factorization method invented by John Pollard. It makes it possible to factor medium-sized integers into a product of prime numbers. We run a C++ program and test how different parameters affect the results. A connection is drawn between Pollard's rho method, the Birthday paradox and Floyd's cycle-finding algorithm. In the results we identify the polynomial function that gives the best effectiveness and performance for Pollard's rho method.
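The method the abstract describes can be sketched in a few lines. This is the textbook Floyd-cycle variant of Pollard's rho with the iteration x -> x^2 + c mod n, not the author's C++ implementation; the constant c stands in for the polynomial parameter the thesis experiments with.

```python
import math

def pollard_rho(n, c=1):
    """Pollard's rho with Floyd's cycle finding: iterate
    x -> (x*x + c) mod n; by the Birthday paradox a collision modulo an
    unknown factor p appears quickly and shows up as gcd(|x - y|, n) > 1."""
    if n % 2 == 0:
        return 2
    x = y = 2
    d = 1
    while d == 1:
        x = (x * x + c) % n          # tortoise: one step
        y = (y * y + c) % n          # hare: two steps
        y = (y * y + c) % n
        d = math.gcd(abs(x - y), n)
    return d if d != n else None     # d == n: unlucky cycle, retry with new c
```

For example, `pollard_rho(8051)` returns a nontrivial factor of 8051 = 83 * 97.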
605

Um algoritmo evolutivo para o problema de dimensionamento de lotes em fundições de mercado / An evolutionary algorithm to the lot-sizing in market foundries

Camargo, Victor Claudio Bento de 16 March 2009
Segundo uma pesquisa recente realizada junto ao setor de fundições, uma importante preocupação do setor é melhorar seu planejamento de produção. Um plano de produção em uma fundição envolve duas etapas interdependentes: a determinação das ligas a serem fundidas e dos lotes que serão produzidos. Neste trabalho, estudamos o problema de dimensionamento de lotes para fundições de pequeno porte, cujo objetivo é determinar um plano de produção de mínimo custo. Como sugerido na literatura, a heurística proposta trata as etapas do problema de forma hierárquica: inicialmente são definidas as ligas e, posteriormente, os lotes que são produzidos a partir delas. Para a solução do problema, propomos um algoritmo genético que explora um conjunto de possibilidades para a determinação das ligas e utiliza uma heurística baseada em relaxação lagrangiana para determinação dos itens a serem produzidos. Além disso, uma abordagem para o mesmo problema é proposta utilizando o problema da mochila para determinar os itens a serem produzidos. Bons resultados foram obtidos pelos métodos propostos. / According to a recent survey conducted with the foundry sector, one of the sector's main concerns is to improve its production planning. A foundry production plan involves two interdependent stages: the determination of the alloys to be melted and of the lots that will be produced. In this work, we study the lot-sizing problem for small foundries, whose purpose is to determine a production plan of minimum cost. As suggested in the literature, the proposed heuristic addresses the problem stages in a hierarchical way: first we define the alloys and, subsequently, the lots that are produced from them. We propose a genetic algorithm that explores a set of possibilities for determining the alloys and uses a Lagrangian-relaxation-based heuristic to determine the items to be produced. In addition, an approach to the same problem is proposed that uses the knapsack problem to determine the items to be produced. Good results were obtained by the proposed methods.
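As a hedged illustration of the knapsack subproblem the abstract mentions (the actual lot-sizing model in the thesis is considerably more involved), a standard 0/1 knapsack dynamic program looks like this; the item values, weights and capacity are made up.

```python
def knapsack(values, weights, capacity):
    """0/1 knapsack via dynamic programming over remaining capacity.
    Returns the maximum total value that fits within `capacity`."""
    dp = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        # iterate capacity downwards so each item is used at most once
        for c in range(capacity, w - 1, -1):
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]

# Hypothetical items: picking items 2 and 3 (value 100 + 120) fills 50 units.
max_value = knapsack([60, 100, 120], [10, 20, 30], 50)
```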
606

Estudo e aplicação do algoritmo FDK para a reconstrução de imagens tomográficas multi-cortes / Study and application of the FDK algorithm for multi-slice tomographic images reconstruction

Araujo, Ericky Caldas de Almeida 29 October 2008
O presente projeto consistiu no estudo e aplicação do algoritmo FDK (Feldkamp-Davis-Kress) para a reconstrução de imagens tomográficas utilizando a geometria de feixe cônico, resultando na implementação de um sistema adaptado de tomografia computadorizada multicortes (TCMC). Para a aquisição das projeções, utilizou-se uma plataforma giratória com goniômetro acoplado, um equipamento de raios X e um detector digital, tipo CCD. Para processar a reconstrução das imagens, foi utilizado um PC, no qual foi implementado o algoritmo FDK. Inicialmente foi aplicado o algoritmo FDK original, no qual se assume o caso físico ideal no processo de medições. Em seguida, foram incorporadas ao algoritmo, algumas correções de artefatos relacionados ao processo de medição das projeções. Para testar o funcionamento do algoritmo implementado, foram feitas reconstruções a partir de projeções simuladas computacionalmente. Foram montados e testados três sistemas de aquisição de projeções, nos quais foram usados diferentes equipamentos de raios X, detectores, metodologias e técnicas radiográficas, a fim de garantir que fossem coletados os melhores conjuntos possíveis de projeções. Realizou-se a calibração do sistema de TCMC implementado. Para isso, utilizou-se um objeto com uma distribuição de coeficientes de atenuação linear conhecida, que foi projetado e fabricado especificamente para isto. Por fim, o sistema de TCMC implementado foi utilizado na reconstrução tomográfica de um objeto não homogêneo, cuja distribuição volumétrica do coeficiente de atenuação linear é desconhecida. As imagens reconstruídas foram analisadas a fim de avaliar o desempenho do sistema de TCMC implementado. / This work consisted on the study and application of the FDK (Feldkamp-Davis-Kress) algorithm for tomographic image reconstruction using cone-beam geometry, resulting on the implementation of an adapted multi-slice computed tomography (MSCT) system. 
For the acquisition of the projections, a rotating platform coupled to a goniometer, an X-ray machine and a CCD-type digital detector were used. The FDK algorithm was implemented on a PC, which was used for the reconstruction process. Initially, the original FDK algorithm was applied, considering only ideal physical conditions in the measurement process. Then some artifact corrections related to the projection measurement process were incorporated. Computational simulations were performed to test the functioning of the implemented algorithm. Three projection acquisition systems, which used different X-ray equipment, detectors, methodologies and radiographic techniques, were assembled and tested in order to ensure that the best possible sets of projections were collected. The implemented MSCT system was calibrated using an object with a known linear attenuation coefficient distribution, designed and manufactured specifically for this purpose. Finally, the implemented MSCT system was used for the multi-slice tomographic reconstruction of an inhomogeneous object whose attenuation coefficient distribution was unknown. The reconstructed images were analyzed to assess the performance of the implemented MSCT system.
607

Decision support using Bayesian networks for clinical decision making

Ogunsanya, Oluwole Victor January 2012
This thesis investigates the use of Bayesian Networks (BNs), augmented by the Dynamic Discretization Algorithm, to model a variety of clinical problems. In particular, the thesis demonstrates four novel applications of BN and dynamic discretization to clinical problems. Firstly, it demonstrates the flexibility of the Dynamic Discretization Algorithm in modeling existing medical knowledge using appropriate statistical distributions. Many practical applications of BNs use the relative frequency approach while translating existing medical knowledge to a prior distribution in a BN model. This approach does not capture the full uncertainty surrounding the prior knowledge. Secondly, it demonstrates a novel use of the multinomial BN formulation in learning parameters of categorical variables. The traditional approach requires fixed number of parameters during the learning process but this framework allows an analyst to generate a multinomial BN model based on the number of parameters required. Thirdly, it presents a novel application of the multinomial BN formulation and dynamic discretization to learning causal relations between variables. The idea is to consider competing causal relations between variables as hypotheses and use data to identify the best hypothesis. The result shows that BN models can provide an alternative to the conventional causal learning techniques. The fourth novel application is the use of Hierarchical Bayesian Network (HBN) models, augmented by dynamic discretization technique, to meta-analysis of clinical data. The result shows that BN models can provide an alternative to classical meta analysis techniques. The thesis presents two clinical case studies to demonstrate these novel applications of BN models. The first case study uses data from a multi-disciplinary team at the Royal London hospital to demonstrate the flexibility of the multinomial BN framework in learning parameters of a clinical model. 
The second case study demonstrates the use of BNs and dynamic discretization to solve a decision problem. In summary, the combination of the Junction Tree Algorithm and the Dynamic Discretization Algorithm provides a unified modeling framework for solving interesting clinical problems.
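As a minimal illustration of the kind of inference a clinical BN performs, consider a two-node network, condition -> test. This is plain Bayes' rule, not the dynamic discretization or junction-tree machinery of the thesis, and the prevalence and test accuracies below are invented numbers.

```python
def posterior(prior, sensitivity, specificity):
    """P(condition | positive test) by Bayes' rule for the two-node
    network condition -> test."""
    # total probability of a positive test: true positives + false positives
    p_positive = sensitivity * prior + (1 - specificity) * (1 - prior)
    return sensitivity * prior / p_positive

# Hypothetical figures: 1% prevalence, 90% sensitivity, 95% specificity.
p = posterior(0.01, 0.9, 0.95)
```

Even with a fairly accurate test, the low prior keeps the posterior modest (about 15% here), which is exactly the kind of counterintuitive result that motivates explicit probabilistic models in clinical decision making.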
608

Boundary conditions in Abelian sandpiles

Gamlin, Samuel January 2016
The focus of this thesis is to investigate the impact of the boundary conditions on configurations in the Abelian sandpile model. We have two main results to present in this thesis. Firstly we give a family of continuous, measure-preserving, almost one-to-one mappings from the wired spanning forest to recurrent sandpiles. In the special case of $Z^d$, $d \geq 2$, we show how these bijections yield a power law upper bound on the rate of convergence to the sandpile measure along any exhaustion of $Z^d$. Secondly we consider the Abelian sandpile on ladder graphs. For the ladder sandpile measure, $\nu$, a recurrent configuration on the boundary, $I$, and a cylinder event, $E$, we provide an upper bound for $\nu(E|I) - \nu(E)$.
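The toppling rule underlying the Abelian sandpile model can be sketched as follows. This is the standard model on a finite grid with open (dissipative) boundary, given only for orientation; it is not code from the thesis.

```python
def stabilize(grid):
    """Topple a 2D Abelian sandpile in place until every site holds
    fewer than 4 grains; grains toppled over the edge are lost
    (open boundary). "Abelian" refers to the fact that the final
    stable state does not depend on the order of topplings."""
    rows, cols = len(grid), len(grid[0])
    unstable = True
    while unstable:
        unstable = False
        for i in range(rows):
            for j in range(cols):
                if grid[i][j] >= 4:
                    unstable = True
                    grid[i][j] -= 4                       # site topples
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < rows and 0 <= nj < cols:
                            grid[ni][nj] += 1             # grain to neighbour
    return grid
```

For example, a single site holding 5 grains topples once, sending all 4 grains off the open boundary and leaving 1 grain behind.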
609

Outlier Detection In Big Data

Cao, Lei 29 March 2016
The dissertation focuses on scaling outlier detection to work both on huge static and on dynamic streaming datasets. Outliers are patterns in the data that do not conform to the expected behavior. Outlier detection techniques are broadly applied in applications ranging from credit fraud prevention and network intrusion detection to stock investment tactical planning. For such mission-critical applications, a timely response is often of paramount importance. Yet the processing of outlier detection requests has high algorithmic complexity and is resource-consuming. In this dissertation we investigate the challenges of detecting outliers in big data, in particular those caused by the high velocity of streaming data, the large volume of static data and the large cardinality of the input parameter space for tuning outlier mining algorithms. Effective optimization techniques are proposed to assure the responsiveness of outlier detection in big data. We first propose a novel optimization framework called LEAP to continuously detect outliers over data streams. The continuous discovery of outliers is critical for a large range of online applications that monitor high-volume, continuously evolving streaming data. LEAP encompasses two general optimization principles that utilize the rarity of outliers and the temporal priority relationships among stream data points. Leveraging these two principles, LEAP is not only able to continuously deliver outliers with respect to a set of popular outlier models, but also provides near real-time support for processing powerful outlier analytics workloads composed of large numbers of outlier mining requests with various parameter settings. Second, we develop a distributed approach to efficiently detect outliers over massive-scale static data sets.
In this big data era, as the volume of data advances to new levels, the power of distributed compute clusters must be employed to detect outliers in a short turnaround time. Our approach optimizes the key factors determining the efficiency of distributed data analytics, namely communication costs and load balancing. In particular, we prove that the traditional frequency-based load-balancing assumption is not effective. We thus design a novel cost-driven data partitioning strategy that achieves load balancing. Furthermore, we abandon the traditional "one detection algorithm for all compute nodes" approach and instead propose a novel multi-tactic methodology which adaptively selects the most appropriate algorithm for each node based on the characteristics of the data partition assigned to it. Third, traditional outlier detection systems process each individual outlier detection request, instantiated with a particular parameter setting, one at a time. This is not only prohibitively time-consuming for large datasets, but also tedious for analysts as they explore the data to home in on the most appropriate parameter setting or on the desired results. We thus design an interactive outlier exploration paradigm that is not only able to answer traditional outlier detection requests in near real time, but also offers innovative outlier analytics tools to assist analysts in quickly extracting, interpreting and understanding the outliers of interest. Our experimental studies, including performance evaluations and user studies conducted on real-world datasets (stock, sensor, moving-object and geolocation data), confirm both the effectiveness and efficiency of the proposed approaches.
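One of the popular distance-threshold outlier models the abstract alludes to can be sketched naively, as a quadratic all-pairs scan with none of the streaming or distributed optimizations the dissertation develops; the parameter names r and k are illustrative.

```python
def distance_outliers(points, r, k):
    """Distance-threshold outlier model: a point is an outlier if fewer
    than k other points lie within Euclidean distance r of it.
    Naive O(n^2) scan, for illustration only."""
    outliers = []
    for i, p in enumerate(points):
        neighbours = sum(
            1 for j, q in enumerate(points)
            if i != j and sum((a - b) ** 2 for a, b in zip(p, q)) <= r * r
        )
        if neighbours < k:
            outliers.append(p)
    return outliers

# Three clustered points and one isolated point; only the latter is flagged.
flagged = distance_outliers([(0, 0), (0.1, 0), (0, 0.1), (5, 5)], r=1.0, k=1)
```

The parameter sensitivity of r and k is precisely why the interactive exploration paradigm described above matters: each setting of (r, k) defines a different outlier set, and analysts rarely know the right setting in advance.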
610

Criptografia com caos / Cryptography with chaos

Formolo, Daniel 25 March 2009
O avanço tecnológico e o valor que as informações representam no mundo moderno conduzem a uma constante busca por cifradores mais velozes, seguros e adaptáveis a diversos ambientes. Enquanto a segurança se baseia no comportamento complicado dos algoritmos, a eficiência se baseia na simplicidade das operações matemáticas envolvidas. Os sistemas caóticos têm a propriedade de exibir comportamento complicado mesmo sendo definidos de modo determinístico por funções algebricamente simples. Portanto, a proposta de algoritmos de cifração baseados em sistemas caóticos é um tema de ativa pesquisa. Este trabalho estuda a eficiência e a segurança de um dos mais completos cifradores caóticos. Com base nesse estudo, melhorias são propostas tanto quanto à sua eficiência quanto à sua segurança. Com isto, chega-se a uma versão otimizada, a qual é referida ao longo deste trabalho como cifrador caótico (CC). O cifrador resultante não possui as fraquezas dos algoritmos caóticos propostos até então. Além disso, comparações exper / The technological advance and the value of information in the present day lead to a constant search for more efficient, secure and adaptable ciphers. While the security is based on the complicated behavior of the algorithms, the efficiency is based on the simplicity of the mathematical operations involved. Chaotic systems are characterized by the property of showing complex behavior even when defined deterministically by simple algebraic expressions. Therefore, the proposal of ciphers based on chaotic dynamics has become an active research field. This work deals with the efficiency and security of one of the most complete chaotic ciphers proposed so far. Based on this study, improvements are proposed for both efficiency and security.
The final result is an optimized version, which is referred to as the chaotic cipher (CC) throughout this work. This cipher does not suffer from the weaknesses of the chaotic ciphers proposed so far. Moreover, experimental comparisons show that the CC performance is comparable with those of s
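To make the idea of a chaos-based cipher concrete, here is a toy logistic-map stream cipher. It is emphatically not the cipher studied in the thesis and is not secure; the key values x0 and r are arbitrary assumptions. It only illustrates the principle the abstract states: complex (chaotic) behavior generated by an algebraically simple map.

```python
def logistic_keystream(x0, r, n, warmup=100):
    """Generate n keystream bytes from the logistic map x -> r*x*(1-x);
    the pair (x0, r) acts as the secret key, early iterates are discarded."""
    x = x0
    for _ in range(warmup):                  # skip the transient
        x = r * x * (1 - x)
    stream = []
    for _ in range(n):
        x = r * x * (1 - x)
        stream.append(int(x * 256) % 256)    # quantize the state to a byte
    return stream

def chaotic_xor(data, x0=0.3141592, r=3.99):
    """Toy chaotic cipher: XOR the message with the chaotic keystream.
    Encryption and decryption are the same operation."""
    ks = logistic_keystream(x0, r, len(data))
    return bytes(b ^ k for b, k in zip(data, ks))
```

Because XOR is self-inverse, applying `chaotic_xor` twice with the same key recovers the plaintext; real chaotic ciphers, including the one analyzed in the thesis, add substantially more structure to resist the known attacks on schemes this simple.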
