51

Key-value storage system synchronization in peer-to-peer environments

2014 July 1900 (has links)
Data synchronization is the problem of bringing multiple versions of the same data on different remote devices to the most up to date version. This thesis looks into the particular problem of key-value storage systems synchronization between mobile devices in a peer-to-peer environment. In this research, we describe, implement and evaluate a new key-value storage system synchronization algorithm using a 2-phase approach, combining approximate synchronization in the first phase and exact synchronization in the second phase. The 2-phase architecture helps the algorithm achieve considerable boost in performance in all three major criteria of a data synchronization algorithm, namely synchronization time, processing time and communication cost, while still being suitable to operate in a peer-to-peer environment. The performance increase makes it feasible to employ database synchronization technique in a wider range of mobile applications, especially those operating on a slow peer-to-peer network.
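The abstract does not detail the algorithm, so the following is only a minimal sketch of a generic two-phase idea under assumed choices: a bucketed digest comparison as the approximate first phase, and key-level reconciliation of the flagged buckets as the exact second phase. The bucketing scheme, the 64-bucket default and the "take the remote value" policy are illustrative assumptions rather than the thesis's method, and a real peer-to-peer system would exchange the digests and entries over the network instead of sharing memory.

```python
import hashlib

NUM_BUCKETS = 64  # illustrative partition size, not from the thesis

def _bucket(key, num_buckets=NUM_BUCKETS):
    # Stable bucket index for a key (Python's built-in hash() is not stable across processes).
    return int(hashlib.md5(key.encode()).hexdigest(), 16) % num_buckets

def bucket_digests(store, num_buckets=NUM_BUCKETS):
    # Phase 1 summary: one digest per bucket, built over keys in a deterministic order.
    parts = [hashlib.sha1() for _ in range(num_buckets)]
    for key in sorted(store):
        parts[_bucket(key, num_buckets)].update(f"{key}={store[key]}".encode())
    return [p.hexdigest() for p in parts]

def two_phase_pull(local, remote, num_buckets=NUM_BUCKETS):
    ld, rd = bucket_digests(local, num_buckets), bucket_digests(remote, num_buckets)
    # Phase 1 (approximate): flag buckets whose digests disagree.
    suspect = {b for b in range(num_buckets) if ld[b] != rd[b]}
    # Phase 2 (exact): exchange full entries only for flagged buckets.
    for key, value in remote.items():
        if _bucket(key, num_buckets) in suspect and local.get(key) != value:
            local[key] = value  # naive policy; a real system would also push local changes and merge by version
    return local
```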
52

Optimizing the management of hemodialysis catheter occlusion

Abdelmoneim, Ahmed S. 09 April 2010 (has links)
Hemodialysis catheter occlusion compromises hemodialysis adequacy and increases the cost of care. Repeated administration of alteplase in hemodialysis catheters typically produces only short-term benefits. The purpose of this study was to design, implement and evaluate the efficacy of a step-by-step algorithm to optimize the management of hemodialysis catheter occlusion. The study had a prospective quasi-experimental design in two parts. Baseline data on the use of alteplase and catheter exchange were collected during Part I, while Part II consisted of algorithm implementation. Rates of alteplase use and catheter exchange per 1000 catheter-days were the main outcomes of the study. One hundred and seventy-two catheters in 131 patients were followed up during the course of the study. The vast majority of the study population were on clopidogrel or aspirin (75%), whereas approximately 11% were on warfarin. The adjusted rate of alteplase use was not significantly different after algorithm implementation (Part I vs. Part II relative risk: 1.10; 95% CI: 0.73–1.65, p > 0.05). Similarly, catheter exchange rates did not differ significantly between the two parts of the study (1.12 vs. 1.03 per 1000 catheter-days, p > 0.05). Regression analysis showed that the rate of alteplase use was inversely related to catheter age (p < 0.05). In a secondary analysis of a subgroup of patients with occlusion-related catheter exchanges (n = 28), the number of alteplase administrations significantly increased with longer waiting time for catheter exchange (p < 0.05). In conclusion, the hemodialysis catheter management algorithm was not effective in decreasing the rate of alteplase use.
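For reference, the outcome metric above normalizes event counts to the accumulated catheter-days of follow-up; a minimal illustration, with purely hypothetical numbers rather than the study's data, is:

```python
def rate_per_1000_catheter_days(n_events, catheter_days):
    # Event rate normalized to 1000 catheter-days of follow-up.
    return 1000.0 * n_events / catheter_days

# Hypothetical numbers for illustration only (not the study's data):
# 45 alteplase doses over 40000 catheter-days of follow-up.
print(rate_per_1000_catheter_days(45, 40000))  # 1.125 doses per 1000 catheter-days
```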
53

Optimization of patients' appointments in chemotherapy treatment unit: heuristic and metaheuristic approaches

Shahnawaz, Sanjana 18 September 2012 (has links)
This research aims to improve the service performance of a Chemotherapy Treatment Unit by reducing the waiting time of patients within the unit. To fulfill this objective, the chemotherapy treatment unit is first modeled as an identical parallel machines scheduling problem with unequal release times and a single resource. A mathematical model is developed to generate the optimum schedule. A Tabu search (TS) algorithm is then developed, and its performance is evaluated by comparing its results with the mathematical model and with the best results of benchmark problems reported in the literature. An additional resource is then considered, which converts the problem into a dual-resource scheduling problem. Three approaches are proposed to solve this problem: heuristics, a Tabu search algorithm with a heuristic (TSHu), and a Tabu search algorithm for dual resources (TSD).
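As an illustration of the kind of Tabu search described above (not the thesis's actual TS, TSHu or TSD implementations), the sketch below minimizes total patient waiting time on identical chairs with unequal release times, using a "move one patient to another chair" neighbourhood and a short-term tabu list. The objective, neighbourhood, tenure and example data are assumptions.

```python
import itertools

def total_waiting_time(assign, release, proc):
    # Sum of waits when each chair serves its assigned patients in release-time order.
    total = 0
    for jobs in assign:
        t = 0
        for j in sorted(jobs, key=lambda j: release[j]):
            start = max(t, release[j])
            total += start - release[j]
            t = start + proc[j]
    return total

def tabu_search(n_jobs, n_chairs, release, proc, iters=300, tenure=8):
    # Initial solution: round-robin assignment of patients to chairs.
    assign = [[j for j in range(n_jobs) if j % n_chairs == c] for c in range(n_chairs)]
    best = [list(a) for a in assign]
    best_cost = total_waiting_time(assign, release, proc)
    tabu = {}  # (job, chair) -> iteration until which moving that job to that chair is forbidden
    for it in range(iters):
        moves = []
        for src, dst in itertools.permutations(range(n_chairs), 2):
            for j in assign[src]:
                if tabu.get((j, dst), -1) > it:
                    continue  # undoing a recent move is still tabu
                trial = [list(a) for a in assign]
                trial[src].remove(j)
                trial[dst].append(j)
                moves.append((total_waiting_time(trial, release, proc), j, src, trial))
        if not moves:
            break
        cost, j, src, trial = min(moves, key=lambda m: m[0])
        assign = trial
        tabu[(j, src)] = it + tenure  # forbid moving the job straight back to its old chair
        if cost < best_cost:
            best, best_cost = [list(a) for a in trial], cost
    return best, best_cost

# Example: 6 patients, 2 chairs, release times and infusion durations in minutes.
best, cost = tabu_search(6, 2, release=[0, 10, 10, 30, 45, 50], proc=[60, 30, 45, 20, 40, 25])
print(best, cost)
```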
54

Exact and approximation algorithms for two combinatorial optimization problems

Li, Zhong 06 1900 (has links)
In this thesis, we present our work on two combinatorial optimization problems. The first is the Bandpass problem, for which we designed a linear-time exact algorithm for the 3-column case. The second is the Complementary Maximal Strip Recovery problem, for which we designed a 3-approximation algorithm.
55

High performance computing and algorithm development: application of dataset development to algorithm parameterization.

Jonas, Mario Ricardo Edward January 2006 (has links)
A number of technologies exist that capture data from biological systems. In addition, several computational tools have been created that aim to organize the data resulting from these technologies. The ability of these tools to organize the information into biologically meaningful results, however, needs to be stringently tested. The research contained herein focuses on data produced by technology that records short Expressed Sequence Tags (ESTs).
56

Fast rational function reconstruction

Khodadad, Sara. January 1900 (has links)
Thesis (M.Sc.), School of Computing Science, Simon Fraser University, 2005.
58

Development of a multi-objective variant of the alliance algorithm

Lattarulo, Valerio January 2017 (has links)
Optimization methodologies are particularly relevant nowadays due to the ever-increasing power of computers and the enhancement of mathematical models to better capture reality. These computational methods are used in many different fields and some of them, such as metaheuristics, have often been found helpful and efficient for the resolution of practical applications where finding optimal solutions is not straightforward. Many practical applications are multi-objective optimization problems: there is more than one objective to optimize and the solutions found represent trade-offs between the competing objectives. In the last couple of decades, several metaheuristic approaches have been developed and applied to practical problems and multi-objective versions of the main single-objective approaches were created. The Alliance Algorithm (AA) is a recently developed single-objective optimization algorithm based on the metaphorical idea that several tribes, with certain skills and resource needs, try to conquer an environment for their survival and try to ally together to improve the likelihood of conquest. The AA method has yielded reasonable results in several fields to which it has been applied, thus the development in this thesis of a multi-objective variant to handle a wider range of problems is a natural extension. The first challenge in the development of the Multi-objective Alliance Algorithm (MOAA) was acquiring an understanding of the modifications needed for this generalization. The initial version was followed by other versions with the aim of improving MOAA performance to enable its use in solving real-world problems: the most relevant variations, which led to the final version of the approach, have been presented. The second major contribution in this research was the development and combination of features or the appropriate modification of methodologies from the literature to fit within the MOAA and enhance its potential and performance. An analysis of the features in the final version of the algorithm was performed to better understand and verify their behavior and relevance within the algorithm. The third contribution was the testing of the algorithm on a test-bed of problems. The results were compared with those obtained using well-known baseline algorithms. Moreover, the last version of the MOAA was also applied to a number of real-world problems and the results, compared against those given by baseline approaches, are discussed. Overall, the results have shown that the MOAA is a competitive approach which can be used 'out-of-the-box' on problems with different mathematical characteristics and in a wide range of applications. Finally, a summary of the objectives achieved, the current status of the research and the work that can be done in future to further improve the performance of the algorithm is provided.
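For readers unfamiliar with the multi-objective setting referred to above, the trade-off set is commonly formalised via Pareto dominance; the snippet below illustrates only that generic notion and is not part of the MOAA itself.

```python
def dominates(a, b):
    # a dominates b (minimization): a is no worse in every objective and strictly better in at least one.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    # Keep only the non-dominated solutions, i.e. the trade-off set returned by a multi-objective run.
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t is not s)]

# Minimizing (cost, weight): (6, 10) is dominated by (2, 9); the other two are incomparable trade-offs.
print(pareto_front([(2, 9), (5, 4), (6, 10)]))  # -> [(2, 9), (5, 4)]
```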
59

Design, analysis and implementation of efficient symmetric cryptographic primitives using the Wide Trail strategy

Décio Luiz Gazzoni Filho 27 February 2008 (has links)
We extend the work of Vincent Rijmen and Joan Daemen on the Wide Trail strategy, a design methodology for symmetric-key cryptographic primitives which are efficient and provably secure against differential and linear cryptanalysis. We concern ourselves mainly with improving the efficiency of primitives designed according to the Wide Trail strategy. To that end, we investigate two distinct lines of research: the applicability of the bitslicing technique to the software implementation of primitives based on the Wide Trail strategy; and the design of structured S-boxes with efficient implementation in hardware and bitslicing, and specifically, the use of rotation-symmetric S-boxes, which exhibit advantageous implementation properties. We also perform general implementation and optimization work on selected software platforms, to further realize the claims of efficiency of the Wide Trail strategy. Additionally, we apply our expertise and proposed techniques to the design of new highly-efficient cryptographic primitives, in particular the hash function MAELSTROM-0 and the legacy-level block cipher FUTURE.
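As background for the rotation-symmetric S-boxes mentioned above: such an S-box commutes with cyclic bit rotation, S(rotl(x)) = rotl(S(x)). The sketch below is only a generic property check on a toy 3-bit example, not one of the thesis's constructions.

```python
def rotl(x, r, n):
    # Cyclic left rotation of an n-bit word by r positions.
    return ((x << r) | (x >> (n - r))) & ((1 << n) - 1)

def is_rotation_symmetric(sbox, n):
    # S is rotation-symmetric iff S(rotl(x, r)) == rotl(S(x), r) for every input x and rotation r.
    return all(sbox[rotl(x, r, n)] == rotl(sbox[x], r, n)
               for x in range(1 << n) for r in range(n))

# Bitwise complement commutes with rotation, so this toy 3-bit S-box passes the check.
print(is_rotation_symmetric([x ^ 0b111 for x in range(8)], 3))  # True
```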
60

A mathematical theory of synchronous concurrent algorithms

Thompson, Benjamin Criveli January 1987 (has links)
A synchronous concurrent algorithm is an algorithm that is described as a network of intercommunicating processes or modules whose concurrent actions are synchronised with respect to a global clock. Synchronous algorithms include systolic algorithms; these are algorithms that are well-suited to implementation in VLSI technologies. This thesis provides a mathematical theory for the design and analysis of synchronous algorithms. The theory includes the formal specification of synchronous algorithms; techniques for proving the correctness and performance or time-complexity of synchronous algorithms, and formal accounts of the simulation and top-down design of synchronous algorithms. The theory is based on the observation that a synchronous algorithm can be specified in a natural way as a simultaneous primitive recursive function over an abstract data type; these functions were first studied by J. V. Tucker and J. I. Zucker. The class of functions is described via a formal syntax and semantics, and this leads to the definition of a functional algorithmic notation called PR. A formal account of synchronous algorithms and their behaviour is achieved by showing that synchronous algorithms can be specified in PR. A formal account of the performance of synchronous algorithms is achieved via a mathematical account of the time taken to evaluate a function defined by simultaneous primitive recursion. A synchronous algorithm, when specified in PR, can be transformed into a program in a language called FPIT. FPIT is a language based on abstract data types and on the multiple or concurrent assignment statement. The transformation from PR to FPIT is phrased as a compiler that is proved correct; compiling the PR-representation of a synchronous algorithm thus yields a provably correct simulation of the algorithm. It is proved that FPIT is just what is needed to implement PR by defining a second compiler, this time from FPIT back into PR, which is again proved correct, and thus PR and FPIT are formally computationally equivalent. Furthermore, an autonomous account of the length of computation of FPIT programs is given, and the two compilers are shown to be performance preserving; thus PR and FPIT are computationally equivalent in an especially strong sense. The theory involves a formal account of the top-down design of synchronous algorithms that is phrased in terms of correctness and performance preserving transformations between synchronous algorithms specified at different levels of data abstraction. A new definition of what it means for one abstract data type to be 'implemented' over another is given. This definition generalises the idea of a computable algebra due to A. I. Mal'cev and M. O. Rabin. It is proved that if one data type D is implementable over another data type D', then there exists a correctness and performance preserving compiler mapping high level PR-programs over D to low level PR-programs over D'. The compilers from PR to FPIT and from FPIT to PR are defined explicitly, and our compiler-existence proof is constructive, and so this work is the basis of theoretically well-founded software tools for the design and analysis of synchronous algorithms.
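As a rough illustration of the model described above (not the thesis's PR or FPIT notation), the sketch below evaluates a synchronous network by simultaneous recursion on a global clock: every module computes its time t+1 value from the time t state, and all modules update "at once", mirroring the multiple or concurrent assignment mentioned in the abstract. The module names and the toy pipeline are assumptions for illustration.

```python
def run_synchronous(modules, init, inputs, T):
    # Evaluate a synchronous network by simultaneous recursion on the global clock:
    # every module reads the time-t state, and all time-(t+1) values are written together,
    # i.e. a concurrent (multiple) assignment.
    state = dict(init)
    trace = [state]
    for t in range(T):
        state = {name: f(state, inputs(t)) for name, f in modules.items()}
        trace.append(state)
    return trace

# Toy two-module network: module "a" latches the external input, module "b" accumulates "a".
modules = {
    "a": lambda s, x: x,
    "b": lambda s, x: s["b"] + s["a"],
}
print(run_synchronous(modules, {"a": 0, "b": 0}, lambda t: t + 1, 4))
```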
