621

Workflow scheduling with sensitive task annotations for security and cost optimization in clouds

Shishido, Henrique Yoshikazu 11 December 2018
The evolution of computers has enabled in-silico experiments, including applications based on the workflow model. Workflow execution can be computationally expensive, and grids and clouds are adopted to carry it out. In this context, workflow scheduling algorithms allow meeting different execution criteria such as time and monetary cost. However, security is a criterion that has received increasing attention, because many organizations hesitate to deploy their applications in clouds due to the threats present in an open environment like the Internet. Security-oriented scheduling algorithms consider two scenarios: (a) hybrid clouds: keep the tasks that manipulate sensitive/confidential data in the private cloud and export the remaining tasks to public clouds to satisfy some constraint (e.g., time); and (b) public clouds: use security services available in virtual machine instances to protect tasks that handle sensitive/confidential data. However, scheduling algorithms that use security services select tasks at random, without considering data semantics. This kind of approach may assign protection to non-sensitive tasks, wasting time and resources, while leaving sensitive data without the necessary protection. Given these limitations, this thesis proposes two workflow scheduling approaches: Workflow Scheduling - Task Selection Policies (WS-TSP) and Sensitive Annotation for Security Tasks (SAST). WS-TSP is a scheduling approach that uses a set of policies for task protection. SAST is an approach that exploits the Application Developer's prior knowledge to identify which tasks should be protected. Both WS-TSP and SAST apply security services such as authentication, integrity verification, and encryption to protect the sensitive tasks of the workflow. The approaches were evaluated through an extension of the WorkflowSim simulator that incorporates the overhead of the security services into the workflow's execution time, cost, and risk. Both approaches achieved lower security risk than approaches from the literature, at a reasonable cost and makespan.
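As a rough illustration of the annotation idea behind SAST described above (a minimal sketch with hypothetical task names, data structures, and overhead factors — the thesis' actual evaluation extends the WorkflowSim simulator), only the tasks the developer marks as sensitive pay the security-service overhead:

```python
# Minimal sketch of developer-annotated sensitive tasks; the overhead
# fractions below are assumptions for illustration, not thesis values.
from dataclasses import dataclass, field

SECURITY_OVERHEAD = {"authentication": 0.05, "integrity": 0.10, "encryption": 0.25}

@dataclass
class Task:
    name: str
    base_runtime: float            # seconds on a reference VM
    sensitive: bool = False        # developer-supplied annotation (SAST idea)
    services: list = field(default_factory=list)

def protected_runtime(task: Task) -> float:
    """Runtime after adding security-service overhead to sensitive tasks only."""
    if not task.sensitive:
        return task.base_runtime
    overhead = sum(SECURITY_OVERHEAD[s] for s in task.services)
    return task.base_runtime * (1.0 + overhead)

# Only the annotated task pays the protection cost (hypothetical tasks):
t1 = Task("align_reads", 120.0)
t2 = Task("patient_lookup", 30.0, sensitive=True,
          services=["authentication", "integrity", "encryption"])
print(protected_runtime(t1), protected_runtime(t2))   # 120.0  42.0
```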
622

A Semi-Automated Algorithm for Segmenting the Hippocampus in Patient and Control Populations

Muncy, Nathan McKay 01 June 2016
Calculating hippocampal volume from Magnetic Resonance (MR) images is an essential task in many studies of neurocognition in healthy and diseased populations. The 'gold standard' method involves hand tracing, which is accurate but laborious, requiring expertly trained researchers and significant amounts of time. As such, segmenting large datasets with the standard method is impractical. Current automated pipelines are inaccurate at hippocampal demarcation and volumetry. We developed a semi-automated hippocampal segmentation pipeline based on the Advanced Normalization Tools (ANTs) suite of programs and applied it to 70 participant scans (26 female) from groups that included participants diagnosed with autism spectrum disorder, healthy older adults (mean age 74), and healthy younger controls. We found that hippocampal segmentations obtained with the semi-automated pipeline more closely matched the segmentations of an expert rater than those obtained using FreeSurfer or produced by novice raters. Further, the pipeline performed better when it included manually placed landmarks and when it used a template generated from a heterogeneous sample (one that included the full variability of group assignments) rather than a template generated from a more homogeneous sample (only individuals within a given age range or with a specific neuropsychiatric diagnosis). Additionally, the semi-automated pipeline required much less time (5 minutes per brain) than manual segmentation (30-60 minutes per brain) or FreeSurfer (8 hours per brain).
623

A Distributed Algorithm for Optimal Dispatch in Smart Power Grids with Piecewise Linear Cost Functions

Yasmeen, Aneela 01 July 2013
We consider the optimal economic dispatch of power generators in a smart electric grid: allocating power among generators to meet load requirements at minimum total cost. We assume that each generator has a piecewise linear cost function. We first present a polynomial-time algorithm that achieves optimal dispatch. We then present a decentralized algorithm in which each generator independently adjusts its power output using only the aggregate power imbalance in the network, which each generator can observe through local measurements of the frequency deviation on the grid. The proposed algorithm drives the power imbalance to zero exponentially fast while eventually minimizing the generation cost.
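As a rough illustration of the decentralized update described above (a minimal sketch under assumed gains and demand values, not the thesis' algorithm, and ignoring the cost-minimization layer), each generator needs only the shared imbalance signal to converge:

```python
# Each generator observes only the aggregate imbalance (demand minus total
# generation, as revealed by the grid frequency deviation) and nudges its
# own output. Gains and set-points below are assumed values.
demand  = 1000.0                     # MW
outputs = [200.0, 300.0, 250.0]      # current generator set-points (MW)
gains   = [0.10, 0.15, 0.20]         # local adjustment gains, sum < 1

for step in range(30):
    imbalance = demand - sum(outputs)            # observable by every generator
    outputs = [p + k * imbalance for p, k in zip(outputs, gains)]

# The imbalance shrinks by a factor (1 - sum(gains)) = 0.55 per step,
# i.e. it is erased exponentially.
print(demand - sum(outputs))                     # ~ 4e-6 MW after 30 steps
```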
624

Sensitivity Analyses for Tumor Growth Models

Mendis, Ruchini Dilinika 01 April 2019
This study consists of sensitivity analyses for two previously developed tumor growth models: the Gompertz model and the quotient model. The two models are considered in both continuous and discrete time. In continuous time, model parameters are estimated using the least-squares method, while in discrete time, the partial-sum method is used. Moreover, frequentist and Bayesian methods are used to construct confidence intervals and credible intervals for the model parameters. We apply Markov Chain Monte Carlo (MCMC) techniques, namely the Random Walk Metropolis algorithm with a non-informative prior and the Delayed Rejection Adaptive Metropolis (DRAM) algorithm, to construct the parameters' posterior distributions and then obtain credible intervals.
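A minimal sketch of the Random Walk Metropolis step mentioned above, fitting Gompertz parameters (a, K) to synthetic data under a flat prior. The Gompertz parameterization, noise level, and proposal scales are illustrative assumptions, and the DRAM variant is not shown:

```python
import numpy as np

rng = np.random.default_rng(0)

def gompertz(t, a, K, V0=1.0):
    """One common Gompertz growth form: V(t) -> K as t -> infinity."""
    return K * np.exp(np.log(V0 / K) * np.exp(-a * t))

# Synthetic tumor-volume observations with Gaussian noise (assumed data).
t = np.linspace(0, 10, 25)
y = gompertz(t, a=0.4, K=50.0) + rng.normal(0, 1.0, t.size)

def log_post(theta):
    a, K = theta
    if a <= 0 or K <= 0:                 # flat (non-informative) prior, positive orthant
        return -np.inf
    resid = y - gompertz(t, a, K)
    return -0.5 * np.sum(resid**2)       # Gaussian likelihood, sigma = 1

theta, lp = np.array([0.2, 30.0]), log_post(np.array([0.2, 30.0]))
samples = []
for _ in range(20000):
    prop = theta + rng.normal(0, [0.02, 1.0])    # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:     # Metropolis accept/reject
        theta, lp = prop, lp_prop
    samples.append(theta)

burned = np.array(samples[5000:])                # discard burn-in
print(burned.mean(axis=0))                       # posterior means ~ (0.4, 50)
print(np.percentile(burned, [2.5, 97.5], axis=0))  # 95% credible intervals
```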
625

A.I. - Algorithmic Interactions

Jackson, Delbert Wayne 01 May 2013
This thesis discusses how I created artwork when I started graduate school, how my artwork evolved as I explored what art making meant to me, and how my thoughts about art making have developed. I then conclude by discussing the artwork I produced for my thesis show and how that work was shaped by my previous observations and artworks.
626

Coherence-based transmissibility as a damage indicator for highway bridges

Schallhorn, Charles Joseph 01 December 2015
Vibration-based damage detection methods are used in structural applications to identify the global dynamic response of the system. The work presented here exhibits a vibration-based damage detection algorithm that calculates a damage indicator, based on limited frequency bands of the transmissibility function that have high coherence, as a metric for changes in the dynamic integrity of the structure. The methodology was tested using numerical simulation, laboratory experimentation, and field testing, with success in detecting, comparatively locating, and relatively quantifying different damage cases, while also parametrically investigating variables identified as issues in similar existing methods. Throughout both the numerical and laboratory analyses, the results were used to successfully detect damage resulting from crack growth or the formation of new cracks. Field results using stochastic operational traffic loading indicate the capability of the proposed methodology to evaluate changes in the health condition of a section of the bridge and to consistently detect cracks of various sizes (30 to 60 mm) on a sacrificial specimen integrated with the bridge abutment and a floor beam. Fluctuations in environmental and loading conditions are known to create uncertainties in most damage detection processes; however, this work demonstrates that by limiting the features of transmissibility to frequency ranges of high coherence, the effect of these parameters becomes less significant compared to the effect of damage and can in some instances be neglected. The results of additional field testing using controlled impact forces on the sacrificial specimen reinforce the findings from the operational loading in detecting damage.
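A minimal sketch of the general idea described above (an assumed formulation, not the thesis' exact estimator): estimate the transmissibility between two response channels, mask it to high-coherence frequency lines, and score damage as the change relative to a baseline. The sampling rate, window length, and coherence threshold are illustrative assumptions:

```python
import numpy as np
from scipy.signal import csd, welch, coherence

FS = 1024       # Hz, assumed sampling rate
NPERSEG = 1024  # assumed Welch window length

def transmissibility(x, y):
    """H1-type transmissibility estimate between response channels x and y."""
    f, Gxy = csd(x, y, fs=FS, nperseg=NPERSEG)
    _, Gxx = welch(x, fs=FS, nperseg=NPERSEG)
    return f, Gxy / Gxx

def damage_indicator(x_ref, y_ref, x_new, y_new, gamma2_min=0.95):
    """Change in transmissibility magnitude, restricted to coherent lines."""
    _, T_ref = transmissibility(x_ref, y_ref)
    _, T_new = transmissibility(x_new, y_new)
    _, g2 = coherence(x_new, y_new, fs=FS, nperseg=NPERSEG)
    mask = g2 >= gamma2_min              # keep only high-coherence frequency lines
    return np.sum(np.abs(np.abs(T_new[mask]) - np.abs(T_ref[mask])))
```

Restricting the comparison to high-coherence lines is what suppresses the influence of noise and of environmental/loading variability relative to the influence of damage.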
627

Optimization of a Low Reynolds Number 2-D Inflatable Airfoil Section

Johansen, Todd A. 01 December 2011
A stand-alone genetic algorithm (GA) and a surrogate-based optimization (SBO) combined with a GA were compared for accuracy and performance. Comparisons took place using the Ackley function and Rastrigin's function, two functions with multiple local maxima and minima that can cause problems for more traditional optimization methods, such as gradient-based methods. The GA and the SBO with GA were applied to the functions through a Fortran interface, and it was found that the SBO could use the same number of function evaluations as the GA and achieve at least 5 orders of magnitude greater accuracy through the use of surrogate evaluations. The two optimization methods were then used in conjunction with computational fluid dynamics (CFD) analysis to optimize the shape of a bumpy airfoil section. Optimization results showed that the use of an SBO can save up to 553 hours of CPU time on 196 cores when compared to the GA through the use of surrogate evaluations. Results also show that the SBO can achieve greater accuracy than the GA in a shorter amount of time, and that the SBO can reduce the negative effects of noise in the simulation data while the GA cannot.
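For reference, a sketch of the two benchmark functions named above in their commonly cited forms (the thesis' exact domains and scalings are not given here, so these definitions are assumptions); both have a global minimum of 0 at the origin surrounded by many local minima, which is what defeats gradient-based optimizers:

```python
import numpy as np

def ackley(x):
    """Ackley function in its standard form; global minimum 0 at x = 0."""
    x = np.asarray(x, dtype=float)
    n = x.size
    return (-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x**2) / n))
            - np.exp(np.sum(np.cos(2 * np.pi * x)) / n) + 20.0 + np.e)

def rastrigin(x):
    """Rastrigin's function; global minimum 0 at x = 0."""
    x = np.asarray(x, dtype=float)
    return 10.0 * x.size + np.sum(x**2 - 10.0 * np.cos(2 * np.pi * x))

print(ackley([0.0, 0.0]), rastrigin([0.0, 0.0]))   # both ~ 0.0
```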
628

The QR Algorithm

Chu, Hsiao-yin Edith 01 May 1979
This work considers two methods for computing an eigenvector, and in addition the associated eigenvalue, of a matrix A.
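A minimal sketch of the basic unshifted QR iteration the title refers to (whether this matches either of the two methods in the thesis is an assumption): repeatedly factor A = QR and form RQ, a similarity transform whose iterates converge, for well-behaved symmetric matrices, to a diagonal matrix of eigenvalues. Practical implementations add shifts and a Hessenberg reduction first:

```python
import numpy as np

def qr_algorithm(A, iters=200):
    """Unshifted QR iteration; for symmetric A, returns eigenvalues/vectors."""
    A = np.array(A, dtype=float)
    V = np.eye(A.shape[0])           # accumulates eigenvectors (symmetric case)
    for _ in range(iters):
        Q, R = np.linalg.qr(A)
        A = R @ Q                    # similar to the previous A
        V = V @ Q
    return np.diag(A), V

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
vals, vecs = qr_algorithm(A)
print(vals)                          # ~ [4.618, 2.382], i.e. (7 ± sqrt(5)) / 2
```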
629

A Study About the Performance and the Convergence of Genetic Algorithms

Rodrigo Moraes Lima de Araujo Costa 07 August 2006
This dissertation investigates the convergence and the performance of Genetic Algorithms: the problems, the proposed solutions, and measures. The work consists of five main parts: a discussion of the mathematical foundations that seek to explain how a Genetic Algorithm works; a study of the main problems associated with the convergence and performance of Genetic Algorithms; an analysis of techniques and alternative algorithms for improving convergence; a study of measures that estimate the expected degree of difficulty for the convergence of Genetic Algorithms; and case studies. The mathematical foundations of Genetic Algorithms rest on the concepts of schema and building blocks developed by Holland (apud Goldberg, 1989a). Although these concepts constitute the fundamental theory on which convergence is based, important questions remain about the process through which schemata interact during the evolution of a Genetic Algorithm (Forrest et al., 1993b). This work discusses the main objections that have been raised against the validity of these foundations, including the controversies generated by the need for a more dynamic view of Genetic Algorithms in which the population sample and the results of recombination are taken into account. In particular: the objections pointed out by Thornton (1995) regarding the coherence of associating the concepts of schema and building blocks; the contradiction between the Schema Theorem and Price's Theorem noted by Altenberg (1994); and the proposals to adapt the Fundamental Theorem of Genetic Algorithms to the concept of fitness variance within a population. The main convergence and performance problems of a Genetic Algorithm are discussed: Deception and Epistasis. Although Deception is strongly linked to convergence difficulty, it is argued that it is not by itself sufficient to make a problem hard for a Genetic Algorithm (GA-hard) (Grefenstette, 1993). The Walsh coefficients (Goldberg, 1989b) are presented, together with their relation to the ideas of schema and epistasis and their use in deceptive functions. Several functions associated with the concepts of Deception and Epistasis are analyzed: the 6-bit fully deceptive and fully easy functions proposed by Deb and Goldberg (1994); the 3-bit fully deceptive functions of Deb et al. (1989); the deceptive-but-easy and non-deceptive-but-hard functions of Grefenstette (op. cit.); the F2 and F3 functions of Whitley (1992); and the NK functions (apud Harvey, 1993) and Royal Road functions (Forrest et al., op. cit.). Alternative techniques for improving convergence basically consist of evolutionary algorithms with characteristics specific to a given type of problem; several are analyzed, such as the Messy GA of Goldberg et al. (1989), the Structured GA of Dasgupta et al. (s.d.), the Augmented GA of Grefenstette (ibidem), and the algorithms proposed by Paredis (1996b). The importance of an adequate choice of parameters and of chromosome representation for easier convergence is also discussed and exemplified. The study of convergence measures for Genetic Algorithms yields a classification into two types: probabilistic measures and measures based on landscapes. The remarks of Koza (1994) and Altenberg (op. cit.) on the convergence of evolutionary algorithms are also presented. Special emphasis is given to the measure of expected convergence difficulty based on the Fitness Distance Correlation (FDC), as proposed by Jones and Forrest (1995b). The case study analyzes the behavior of Genetic Algorithms, as assessed by the FDC measure, on a set of mathematical functions, including those already cited, the test functions proposed by De Jong (apud Goldberg, op. cit.), and the deceptive function of Liepins and Vose (apud Deb et al., 1994). The FDC measure is also extended toward a more dynamic view of Genetic Algorithms. To run these experiments, the GENEsYs 1.0 environment, developed by Thomas Bäck (1992) from its precursor Genesis by John Grefenstette (apud Ribeiro et al., 1994), was adapted and extended.
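A minimal sketch of the Fitness Distance Correlation (FDC) measure of Jones and Forrest highlighted above, computed here on the onemax problem (an assumed example, not one of the thesis' test functions): FDC is the correlation between sampled fitness values and the distance to the nearest global optimum:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20
optimum = np.ones(n, dtype=int)             # onemax: the all-ones string is optimal

pop = rng.integers(0, 2, size=(500, n))     # random sample of bitstrings
fitness = pop.sum(axis=1)                   # onemax fitness: number of ones
distance = np.sum(pop != optimum, axis=1)   # Hamming distance to the optimum

fdc = np.corrcoef(fitness, distance)[0, 1]
print(fdc)   # -1.0: fitness rises exactly as distance falls => easy for a GA
```

For a maximization problem, FDC near -1 signals a landscape where fitness reliably guides the search toward the optimum, while values near 0 or positive flag GA-hard (e.g., deceptive) landscapes.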
630

Comparison of Routing and Network Coding in Group Communications

Xu, Yangyang 24 March 2009
In traditional communication networks, information is delivered as a sequence of packets from source to destination by routing through intermediate nodes that only store and forward those packets. Recent research shows that routing alone is not sufficient to achieve the maximum information transmission rate across a communication network [1]. Network coding is a currently researched topic in information theory that allows nodes to generate output data by encoding their received data. Thus, nodes may mix the input packets together and send them out as fewer packets. This potential throughput benefit was the initial motivation for research in network coding. Group communications refers to many-to-many communication sessions in which multiple sources multicast independent data to the same group of receivers. Researchers often treat group communications as a simple problem by adding a super source connected to all the sources with unbounded-capacity links. However, this method cannot control the fairness between different sources, and it may be incorrect in some scenarios; this research presents an example illustrating this and analyzes the reason for it. The maximum multicast throughput problem using routing only is NP-complete. Wu et al. introduced a greedy tree-packing algorithm based on Prim's algorithm as a sub-optimal alternative [2]. This algorithm is modified in this work for the group communications problem with routing in undirected networks. The throughput benefit of network coding has been shown in directed networks. In undirected networks, however, researchers have only investigated the multiple-unicast-sessions problem and the one-multicast-session problem, and in most cases network coding does not seem to yield any throughput benefit [3][4]. Li et al. introduced a c-flow algorithm using linear programming to find the maximum throughput of one multicast session using network coding [3]. We adapted this algorithm for group communications with network coding in undirected networks to overcome the disadvantage of the traditional method. Both algorithms were simulated using MATLAB and their results were compared. Further, it is demonstrated that network coding does not have a constant throughput benefit in undirected networks.
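To make the routing-versus-coding contrast above concrete, here is a minimal sketch of the textbook butterfly-network example (a standard illustration, not taken from the thesis): the bottleneck node forwards the XOR of two packets, letting both receivers recover both packets in one use of the network, which pure store-and-forward routing cannot match:

```python
# Classic butterfly-network coding example with packets as bit patterns.
a = 0b10110100        # packet routed directly toward receiver 1
b = 0b01101001        # packet routed directly toward receiver 2

coded = a ^ b         # the bottleneck link carries the XOR of both packets

# Receiver 1 holds `a` plus `coded`; receiver 2 holds `b` plus `coded`.
b_at_r1 = a ^ coded   # receiver 1 recovers b
a_at_r2 = b ^ coded   # receiver 2 recovers a
assert b_at_r1 == b and a_at_r2 == a
```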
