  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
591

Solving cardinality constrained portfolio optimisation problem using genetic algorithms and ant colony optimisation

Li, Yibo January 2015 (has links)
In this thesis we consider solution approaches for the index tracking problem, in which we aim to reproduce the performance of a market index without purchasing all of the stocks that constitute the index. We solve the problem using three different solution approaches: Mixed Integer Programming (MIP), Genetic Algorithms (GAs), and Ant Colony Optimisation (ACO), in each case limiting the number of stocks that can be held. Each index is also assigned different cardinalities to examine how the solution values change. All of the solution approaches are tested on eight market indices. The smallest data set consists of only 31 stocks, whereas the largest includes over 2000 stocks. The computational results from the MIP are used as the benchmark against which the performance of the other solution approaches is measured. Computational results are presented for the different solution approaches and conclusions are given. Finally, we carry out a post-analysis and investigate the best tracking portfolios achieved by the three solution approaches. We summarise the findings of the investigation and, in turn, further improve some of the algorithms. As the formulations of these problems are mixed-integer linear programs, we use the solver CPLEX to solve them. All of the programming is coded in AMPL.
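The GA approach described in this abstract can be illustrated with a minimal sketch (not the thesis's actual implementation): a population of fixed-cardinality stock subsets is evolved under a tracking-error fitness, with a crossover that preserves subset size and a swap mutation. The toy data, the equal-weight portfolio, and all operator parameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def tracking_error(subset, returns, index_returns):
    # equal-weight portfolio over the chosen subset (a simplification;
    # the thesis also optimises the portfolio weights)
    portfolio = returns[:, subset].mean(axis=1)
    return float(np.sqrt(np.mean((portfolio - index_returns) ** 2)))

def ga_select(returns, index_returns, k, pop_size=30, gens=40):
    n = returns.shape[1]
    pop = [rng.choice(n, size=k, replace=False) for _ in range(pop_size)]
    initial_best = min(tracking_error(s, returns, index_returns) for s in pop)
    for _ in range(gens):
        pop.sort(key=lambda s: tracking_error(s, returns, index_returns))
        survivors = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.choice(len(survivors), size=2, replace=False)
            pool = np.union1d(survivors[a], survivors[b])
            child = rng.choice(pool, size=k, replace=False)  # crossover keeps |subset| = k
            if rng.random() < 0.3:                           # mutation: swap one stock out
                child[rng.integers(k)] = rng.choice(np.setdiff1d(np.arange(n), child))
            children.append(child)
        pop = survivors + children
    best = min(pop, key=lambda s: tracking_error(s, returns, index_returns))
    return best, initial_best

# toy data: 40 stocks over 250 days; the "index" is their plain average
returns = rng.normal(0.0, 0.01, size=(250, 40))
index_returns = returns.mean(axis=1)
best, init = ga_select(returns, index_returns, k=5)
```

Because the survivors are carried over unchanged each generation (elitism), the best tracking error never worsens across generations.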
592

Computational modelling of vascular interventions : endovascular device deployment

Spranger, Katerina January 2014 (has links)
Minimally invasive vascular interventions with stent deployment have become a popular alternative to conventional open surgery in the treatment of many vascular disorders. However, the high initial success rates of endovascular repairs have been overshadowed by reported complications that cause re-interventions and, in the worst case, morbidity and mortality. These dangerous complications could be mitigated by a better choice of device design and by appropriate positioning of the implant inside the vessel. However, the interventionist currently has no way to predict, within the clinical setting and before the actual procedure, the resulting position and expanded shape of the device for a given patient. Motivated by this unmet clinical need and the lack of suitable methods, this thesis develops a methodology for modelling the virtual deployment of implantable devices inside patient vessels that features fast computational execution times and can be used in clinical practice. This novel deployment method was developed on the basis of a spring-mass model and was tested in different deployment scenarios, expanding stents inside vessels in a matter of seconds. Further, the performance of the method was optimised by calibrating a set of parameters with the help of a genetic algorithm, which uses the outcomes of a finite element analysis as a learning reference. After calibration, the developed stenting method demonstrated acceptable accuracy compared to the "gold standard" of finite element simulation. Finally, on a real patient case, four alternative stenting scenarios were investigated by comparing the subsequent blood flow conditions via computational haemodynamics. The results suggest that device design, dimensions, stiffness, and positioning have important implications for the post-procedural haemodynamics of the vessel.
Ultimately, the presented results can play a transformative role in aiding clinical decision-making and can also give rise to overall improvements in implant design and deployment procedures.
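The spring-mass idea at the core of such a deployment method can be sketched in one dimension (a toy stand-in, not the thesis's model): nodes joined by springs are relaxed with overdamped updates until every spring approaches its rest length, analogous to a compressed stent frame expanding toward its unloaded geometry. The stiffness, step size, and iteration count below are illustrative.

```python
def relax_chain(positions, rest_length, k=0.5, step=0.1, iters=2000):
    """Overdamped relaxation of a 1D chain of nodes joined by springs.

    The first node is pinned; every other node moves along its net
    spring force until each spring approaches its rest length.
    """
    pos = list(positions)
    for _ in range(iters):
        forces = [0.0] * len(pos)
        for i in range(len(pos) - 1):
            stretch = (pos[i + 1] - pos[i]) - rest_length
            forces[i] += k * stretch       # spring pulls/pushes node i toward node i+1
            forces[i + 1] -= k * stretch
        for i in range(1, len(pos)):       # node 0 stays pinned
            pos[i] += step * forces[i]
    return pos

# a compressed "device": nodes start bunched up, then expand to rest spacing 1.0
expanded = relax_chain([0.0, 0.1, 0.2, 0.3], rest_length=1.0)
```

The overdamped update (position follows force directly, with no inertia) is what makes this kind of model fast: each iteration is a cheap local sweep, so the equilibrium shape emerges in seconds rather than requiring a full finite element solve.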
593

Algoritmus pro automatizovanou kartografickou generalizaci shluků budov metodou agregace / Algorithm for automated building simplification using aggregation

Svobodová, Jana January 2012 (has links)
This diploma thesis deals with automated cartographic generalization. The main aim is to propose a new generalization algorithm for building aggregation. The first part gives a summary of existing algorithms for building aggregation. The new algorithm is then presented: first, auxiliary data structures and algorithms are introduced, then cartographic and geometric requirements are defined. The new algorithm is based on the principle of straight skeleton construction. Outer vertices are removed from the constructed straight skeletons and those structures are aggregated; the aggregated polygon is then reconstructed from the aggregated structures. The second part is focused on implementation and evaluation of results. The algorithm is implemented using the open-source libraries CGAL, Boost, and Shapelib. The results and a comparison with ArcGIS are discussed in the conclusion of the thesis.
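To illustrate the input and output of building aggregation with a self-contained example, the sketch below merges two nearby building footprints using a convex hull. This is deliberately much cruder than the thesis's straight-skeleton method (a hull discards concavities between the buildings), but it shows the basic operation of replacing a cluster of footprints with one aggregated polygon.

```python
def convex_hull(points):
    """Andrew's monotone-chain convex hull of 2D points (CCW order)."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:                       # build lower hull left-to-right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):             # build upper hull right-to-left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]      # endpoints shared, drop duplicates

# two axis-aligned building footprints with a small gap between them
building_a = [(0, 0), (2, 0), (2, 2), (0, 2)]
building_b = [(3, 0), (5, 0), (5, 2), (3, 2)]
merged = convex_hull(building_a + building_b)
```

For these two rectangles the aggregated polygon is the bounding rectangle spanning both footprints.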
594

Skládání obdélníků / Packing rectangles

Pavlík, Tomáš January 2016 (has links)
This thesis studies the open problem of packing rectangles: is it possible to pack the rectangles with dimensions 1/n x 1/(n+1), for n = 1, 2, 3, ..., into a unit square? The aim of this thesis is an analysis of the problem and of a related algorithm. Attention is focused mainly on the implementation of this algorithm and on the study of how it works.
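A quick exact computation shows why this packing question is so delicate: the rectangles' total area telescopes to precisely the area of the unit square, leaving no slack at all.

```python
from fractions import Fraction

def total_area(N):
    # exact total area of the rectangles 1/n x 1/(n+1) for n = 1..N
    return sum(Fraction(1, n * (n + 1)) for n in range(1, N + 1))

# 1/(n(n+1)) = 1/n - 1/(n+1), so the sum telescopes to 1 - 1/(N+1):
# the infinite family of rectangles has total area exactly 1
print(total_area(10))   # 10/11
```

So any successful packing must be perfect, with zero wasted area in the limit, which is what makes the problem hard and still open.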
595

Security-driven Design Optimization of Mixed Cryptographic Implementations in Distributed, Reconfigurable, and Heterogeneous Embedded Systems

Nam, HyunSuk January 2017 (has links)
Distributed heterogeneous embedded systems are increasingly prevalent in numerous applications, including automotive, avionics, smart and connected cities, and the Internet of Things. With pervasive network access within these systems, security is a critical design concern. This dissertation presents a modeling and optimization framework for distributed, reconfigurable, and heterogeneous embedded systems. Distributed embedded systems consist of numerous interconnected embedded devices, each composed of different computing resources, such as single-core processors, asymmetric multicore processors, field-programmable gate arrays (FPGAs), and various combinations thereof. A dataflow-based modeling framework for streaming applications integrates models for computational latency, mixed cryptographic implementations for inter-task and intra-task communication, security levels, communication latency, and power consumption. For the security model, we present a level-based model of cryptographic algorithms using mixed cryptographic implementations, including both symmetric and asymmetric implementations. We utilize a multi-objective genetic optimization algorithm to optimize security and energy consumption subject to latency and minimum-security-level constraints. The presented methodology is evaluated using a video-based object detection and tracking application and several synthetic benchmarks representing various application types. Experimental results for these design and optimization frameworks demonstrate the benefits of the mixed cryptographic algorithm security model compared to single cryptographic algorithm alternatives. We further consider several distributed heterogeneous embedded system architectures.
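The multi-objective optimization step can be illustrated by its core primitive, Pareto-dominance filtering (a generic sketch, not the dissertation's genetic implementation): each candidate design is scored on several objectives to be minimised, and only the non-dominated candidates are kept. The (energy, security-deficit) pairs below are hypothetical.

```python
def pareto_front(points):
    """Keep the non-dominated points when every coordinate is minimised."""
    def dominates(q, p):
        # q dominates p if q is no worse everywhere and differs somewhere
        return q != p and all(qi <= pi for qi, pi in zip(q, p))
    return [p for p in points if not any(dominates(q, p) for q in points)]

# hypothetical candidate designs scored as (energy consumed, security deficit)
candidates = [(1, 5), (2, 3), (3, 4), (4, 1)]
front = pareto_front(candidates)   # (3, 4) is dominated by (2, 3)
```

A multi-objective GA evolves its population toward this front rather than toward a single scalar optimum, which is what lets the designer trade security against energy after the fact.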
596

Data Visualization of Telenor mobility data

Virinchi, Billa January 2017 (has links)
Nowadays, with the rapid development of cities, understanding the mobility patterns of subscribers is crucial for urban planning and for network infrastructure deployment. Mobile phones, which people carry throughout their daily activities for communication, can be used to analyse the mobility patterns of subscribers in the network. Effective utilisation of network infrastructure (NI) therefore requires studying the mobility patterns of subscribers.

The aim of the thesis is to simulate geospatial Telenor mobility data (i.e. three categorised subscriber segments) and provide visual support in Google Maps using the Google Maps API, which helps telecommunication operators make decisions about effective utilisation of network infrastructure.

The thesis has two major objectives: first, to categorise the given geospatial Telenor mobility data using a subscriber mobility algorithm; second, to provide visual support for the resulting categorised data in Google Maps using a geovisualisation simulation tool.

The subscriber mobility algorithm categorises subscribers into three segments (infrastructure-stressing, medium, and friendly). A Tetris optimisation model is used to validate and confirm the algorithm's output. To give visual support for each categorised segment, a simulation tool was developed that displays the visualisation results in Google Maps using the Google Maps API.

Applying the subscriber mobility algorithm and the Tetris optimisation model to a geospatial data set of 33,045 subscribers identified only 1,400 subscribers as infrastructure-stressing. To keep the presentation informative, a small region (the Borås region) is used to visualise subscribers from each of the categorised segments.

The conclusion of the thesis is that the functionality thus developed contributes to knowledge discovery from geospatial data and provides visual support for decision making to telecommunication operators.
597

Semiparametric mixture models

Xiang, Sijia January 1900 (has links)
Doctor of Philosophy / Department of Statistics / Weixin Yao / This dissertation consists of three parts that are related to semiparametric mixture models. In Part I, we construct the minimum profile Hellinger distance (MPHD) estimator for a class of semiparametric mixture models in which one component has a known distribution with possibly unknown parameters, while the other component density and the mixing proportion are unknown. Such semiparametric mixture models have often been used in biology and in sequential clustering algorithms. In Part II, we propose a new class of semiparametric mixtures of regression models, in which the mixing proportions and variances are constants but the component regression functions are smooth functions of a covariate. A one-step backfitting estimate and two EM-type algorithms are proposed to achieve the optimal convergence rate for both the global parameters and the nonparametric regression functions. We derive the asymptotic properties of the proposed estimates and show that both proposed EM-type algorithms preserve the asymptotic ascent property. In Part III, we apply the idea of the single-index model to mixtures of regression models and propose three new classes of models: the mixture of single-index models (MSIM), the mixture of regression models with varying single-index proportions (MRSIP), and the mixture of regression models with varying single-index proportions and variances (MRSIPV). Backfitting estimates and the corresponding algorithms are proposed for the new models to achieve the optimal convergence rate for both the parameters and the nonparametric functions. We show that the nonparametric functions can be estimated as if the parameters were known, and that the parameters can be estimated with the same rate of convergence, n^(-1/2), that is achieved in a parametric model.
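The EM-type iteration underlying such mixture estimators can be sketched in its fully parametric form, a two-component Gaussian mixture (a simple cousin of the semiparametric estimators in the dissertation, not the dissertation's method): the E-step computes each point's posterior responsibility, and the M-step takes responsibility-weighted updates of the means, variances, and mixing proportion.

```python
import math
import random

def em_two_gaussians(data, iters=200):
    # crude but safe initialisation from the data range
    mu1, mu2 = min(data), max(data)
    s1 = s2 = (max(data) - min(data)) / 4 or 1.0
    pi = 0.5
    def pdf(x, mu, s):
        return math.exp(-((x - mu) ** 2) / (2 * s * s)) / (s * math.sqrt(2 * math.pi))
    for _ in range(iters):
        # E-step: posterior responsibility of component 1 for each point
        r = [pi * pdf(x, mu1, s1) / (pi * pdf(x, mu1, s1) + (1 - pi) * pdf(x, mu2, s2))
             for x in data]
        # M-step: responsibility-weighted parameter updates
        w1 = sum(r)
        w2 = len(data) - w1
        mu1 = sum(ri * x for ri, x in zip(r, data)) / w1
        mu2 = sum((1 - ri) * x for ri, x in zip(r, data)) / w2
        s1 = math.sqrt(sum(ri * (x - mu1) ** 2 for ri, x in zip(r, data)) / w1) or 1e-6
        s2 = math.sqrt(sum((1 - ri) * (x - mu2) ** 2 for ri, x in zip(r, data)) / w2) or 1e-6
        pi = w1 / len(data)
    return pi, (mu1, s1), (mu2, s2)

random.seed(1)
data = ([random.gauss(0, 1) for _ in range(300)] +
        [random.gauss(5, 1) for _ in range(300)])
pi, comp1, comp2 = em_two_gaussians(data)
```

Each EM iteration cannot decrease the likelihood, which is the (parametric) counterpart of the asymptotic ascent property the dissertation proves for its EM-type algorithms.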
598

An Evolutionary Method for Complementary Cell Suppression

Ditrich, Eric 01 January 2010 (has links)
As privacy concerns become more important, effective and efficient security techniques will become critical to those that are charged with the protection of sensitive information. Agencies that disseminate numerical data encounter a common disclosure control problem called the complementary cell suppression problem. In this problem, cell values that are considered sensitive in the statistical table must be suppressed before the table is made public. However, suppressing only these cells may not provide adequate protection since their values may be inferred using available marginal subtotals. In order to ensure that the values of the sensitive cells cannot be estimated within a specified degree of precision additional non-sensitive cells, called complementary cells, must also be suppressed. Since suppression of non-sensitive cells diminishes the utility of the released data, the objective in the complementary cell suppression problem is to minimize the information lost due to complementary suppression while guaranteeing that the sensitive cells are adequately protected. The resulting constrained optimization problem is known to be NP-hard and has been a major focus of research in statistical data security. Several heuristic methods have been developed to find good solutions for the complementary cell suppression problem. More recently, genetic algorithms have been used to improve upon these solutions. A problem with these GA-based approaches is that a vast majority of the solutions produced do not protect the sensitive cells. This is because the genetic operators used do not maintain the associations between cells that provide the protection. Consequently, the GA has to include an additional procedure for repairing the solutions. This dissertation details an improved GA-based method for the complementary cell suppression problem that addresses this limitation by designing more effective genetic operators. 
Specifically, it mitigates the problem of chromosomal repair by developing a crossover operator that maintains the necessary associations. The study also designs an improved mutation operator that exploits domain knowledge to increase the probability of finding good-quality solutions. The proposed GA was evaluated by comparing it to extant methods on the quality of its evolved solutions and its computational efficiency.
599

An evolutionary algorithm for the constrained forest problem

Queern, John John 01 January 2013 (has links)
Given an undirected edge-weighted graph G and a positive integer m, the Constrained Forest Problem (CFP) seeks a lowest-cost (or minimum-weight) forest F which spans G while satisfying the requirement that each tree in F contain at least m vertices. This problem has been shown to be NP-hard for values of m greater than three, giving rise to a number of approximation strategies for finding reasonable m-forest solutions. This research presents a new genetic algorithm (GA) which can consistently find equal-or-better solutions to the problem when compared to non-genetic alternatives. This GA is unique in that it uses chromosomes which are actual candidate solutions (m-forests) and performs genetic operations (random creation, selection, recombination, and mutation) on these candidate solutions. Experiments were run using 180 different GA configurations on 50 benchmark graphs to determine which operators and techniques would be most successful in solving the m-forest problem. The "heaviest edge first" or HEF algorithm run against the minimum spanning tree (MST) of a graph was used as a performance metric. Previously, the HEF(MST) algorithm had been shown to produce the best results on m-forest problems. When the GA was able to find better results than HEF(MST) on the same problem instance, this was considered a GA success. Since the GA's initial population included heuristic candidate solutions such as HEF(MST), the GA never did worse than the best of these. GA solution quality did vary, however, often finding several different better-than-HEF(MST) solutions, illustrating the stochastic nature of the process. Based on data collected from the 9000 initial problem instances, several factors were shown to significantly improve the quality of the GA solution. Problem instances which did not include mutation had a much lower success rate than those which did. 
Adding calculated heuristic solutions such as HEF(MST) to the initial population allowed the GA to converge more quickly and improved its likelihood of finding better-than-HEF(MST) solutions. Building an initial population using randomly-generated candidate solutions whose edges were restricted to the problem graph's MST proved equally successful. GA configuration options were analyzed using all 9000 test cases and again using only those 403 cases in which the GA was able to find the very best solution for each graph. These analyses were consistent, and resulted in the identification of a single "best" GA configuration which combined the best overall initial population strategy, random seeding algorithms, mutation and crossover strategy. The selected configuration was then further tested using various values of m to ensure that the resulting GA could in fact find better-than-HEF(MST) solutions for the majority of problem instances.
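The HEF(MST) benchmark used above can be sketched from its description (an illustrative reimplementation, not the author's code): build a minimum spanning tree, then scan its edges heaviest-first, deleting an edge whenever both trees that would result still contain at least m vertices.

```python
from collections import defaultdict, deque

def kruskal_mst(n, edges):
    # edges are (weight, u, v) tuples over vertices 0..n-1; returns MST edges
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    mst = set()
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            mst.add((w, u, v))
    return mst

def component_size(forest, start, skipped):
    # size of start's tree in the forest with one edge ignored (BFS)
    adj = defaultdict(list)
    for w, u, v in forest:
        if (w, u, v) != skipped:
            adj[u].append(v)
            adj[v].append(u)
    seen, queue = {start}, deque([start])
    while queue:
        x = queue.popleft()
        for y in adj[x]:
            if y not in seen:
                seen.add(y)
                queue.append(y)
    return len(seen)

def hef_forest(n, edges, m):
    forest = kruskal_mst(n, edges)
    for e in sorted(forest, reverse=True):   # heaviest edge first
        w, u, v = e
        if (component_size(forest, u, e) >= m and
                component_size(forest, v, e) >= m):
            forest.discard(e)                # cut: both sides keep >= m vertices
    return forest

# a weighted path 0-1-2-3-4-5; with m = 3, only the heaviest edge (2,3) is cut
edges = [(1, 0, 1), (2, 1, 2), (10, 2, 3), (1, 3, 4), (2, 4, 5)]
forest = hef_forest(6, edges, m=3)
```

On this toy path the result is two trees of three vertices each, with the weight-10 edge removed; this is the kind of heuristic solution the GA seeds its initial population with and then tries to improve.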
600

Approximate replication of high-breakdown robust regression techniques

Zeileis, Achim, Kleiber, Christian January 2008 (has links) (PDF)
This paper demonstrates that even regression results obtained by techniques close to the standard ordinary least squares (OLS) method can be difficult to replicate if a stochastic model fitting algorithm is employed. / Series: Research Report Series / Department of Statistics and Mathematics
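The replication hazard the paper describes can be demonstrated with a toy stochastic fitter (an illustrative construction, not the paper's estimator): a random-restart local search on a loss with two local minima. The reported "fit" depends on the RNG seed, so exact replication requires recording and re-using that seed.

```python
import random

def stochastic_fit(seed, restarts=1):
    # random-restart local search on a loss with two unequal local minima
    rng = random.Random(seed)
    loss = lambda x: (x * x - 1) ** 2 + 0.1 * x   # minima near x = -1 and x = +1
    best = None
    for _ in range(restarts):
        x = rng.uniform(-2.0, 2.0)                # stochastic starting point
        for _ in range(400):                      # crude descent with step 0.01
            if loss(x - 0.01) < loss(x):
                x -= 0.01
            elif loss(x + 0.01) < loss(x):
                x += 0.01
        if best is None or loss(x) < loss(best):
            best = x
    return round(best, 1)

# same seed -> identical result; different seeds can land in different minima
results = {stochastic_fit(s) for s in range(20)}
```

With the seed fixed, the procedure is perfectly reproducible; without it, the answer reported depends on which basin of attraction the random start happens to fall in, which is exactly why replicating published results from such algorithms is hard.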
