31

Two-Dimensional Convex Hull Algorithm Using the Iterative Orthant Scan / Tvådimensionell Convex Hull Algoritm Med Iterativ Orthant Scan

Freimanis, Davis, Hongisto, Jonas January 2017 (has links)
Finding the minimal convex polygon enclosing a set of points is of great interest in computer graphics and data science, and doing this efficiently is also of economic interest. This thesis explores how a novel variant of bucketing, called the Iterative orthant scan, can be applied to this problem. An algorithm implementing the Iterative orthant scan was created and implemented in C++, and its real-time and asymptotic performance was compared to the industry standard algorithm as well as the traditional textbook convex hull algorithm. The worst-case time complexity was shown to be a logarithmic factor worse than the best theoretical algorithm. The real-time performance was 47.3% better than the traditional algorithm and 66.7% worse than the industry standard for a uniform square distribution. The real-time performance was 61.1% better than the traditional algorithm and 73.4% worse than the industry standard for a uniform triangular distribution. For a circular distribution, the algorithm performed 89.6% worse than the traditional algorithm and 98.7% worse than the industry standard algorithm. The asymptotic performance improved for some of the distributions studied. Parallelization of the algorithm led to an average performance improvement of 35.9% across the tested distributions. Although the new algorithm performed worse than the industry standard algorithm, it outperformed the traditional algorithm in most cases and shows promise for parallelization.
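None of the benchmarked algorithms are reproduced in the abstract; as a point of reference, a minimal sketch of Andrew's monotone chain, a common choice for the "traditional textbook" O(n log n) hull construction, is given below in C++. Treating monotone chain as the textbook baseline is an assumption; the abstract does not name the specific textbook or industry-standard implementations it compared against.

#include <algorithm>
#include <vector>

struct Pt { double x, y; };

// Cross product of (b - a) x (c - a); > 0 means c lies to the left of ab.
static double cross(const Pt& a, const Pt& b, const Pt& c) {
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

// Andrew's monotone chain: O(n log n), returns hull vertices counter-clockwise.
std::vector<Pt> convexHull(std::vector<Pt> pts) {
    std::sort(pts.begin(), pts.end(), [](const Pt& a, const Pt& b) {
        return a.x < b.x || (a.x == b.x && a.y < b.y); });
    const std::size_t n = pts.size();
    if (n < 3) return pts;
    std::vector<Pt> hull(2 * n);
    std::size_t k = 0;
    for (std::size_t i = 0; i < n; ++i) {                // lower hull
        while (k >= 2 && cross(hull[k - 2], hull[k - 1], pts[i]) <= 0) --k;
        hull[k++] = pts[i];
    }
    for (std::size_t i = n - 1, t = k + 1; i-- > 0; ) {  // upper hull
        while (k >= t && cross(hull[k - 2], hull[k - 1], pts[i]) <= 0) --k;
        hull[k++] = pts[i];
    }
    hull.resize(k - 1);  // the last point repeats the first one
    return hull;
}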
32

ON ALGORITHMS FOR THE COLOURFUL LINEAR PROGRAMMING FEASIBILITY PROBLEM

Rong, Guohong 10 1900 (has links)
Given colourful sets S_1, ..., S_{d+1} of points in R^d and a point p in R^d, the colourful linear programming problem is to express p as a convex combination of points x_1, ..., x_{d+1} with x_i in S_i for each i. This problem was presented by Bárány and Onn in 1997, and it is still not known whether a polynomial-time algorithm for it exists. The monochrome version of this problem, expressing p as a convex combination of points in a set S, is a traditional linear programming feasibility problem. The colourful Carathéodory Theorem, due to Bárány in 1982, provides a sufficient condition for the existence of a colourful set of points containing p in its convex hull. Bárány's result was generalized by Holmsen et al. in 2008 and by Arocha et al. in 2009 before being further generalized recently by Meunier and Deza. We study algorithms for colourful linear programming under the conditions of Bárány and their generalizations. In particular, we implement the Meunier-Deza algorithm and enhance previously used random case generators. Computational benchmarking and a performance analysis are presented, including a comparison between the two algorithms of Bárány and Onn, the algorithm of Meunier and Deza, and random picking. / Master of Science (MSc)
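For orientation (a standard formulation, not quoted from the thesis), the feasibility question can be written as the search for weights and colourful points satisfying

\sum_{i=1}^{d+1} \lambda_i x_i = p, \qquad \sum_{i=1}^{d+1} \lambda_i = 1, \qquad \lambda_i \ge 0, \qquad x_i \in S_i,

and the colourful Carathéodory theorem (Bárány, 1982) guarantees that such a choice exists whenever p \in \operatorname{conv}(S_i) for every i = 1, ..., d+1. The algorithmic question studied here is how efficiently such a colourful set can be found.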
33

Différentes propriétés de marches aléatoires avec contraintes géométriques et dynamiques / Different properties of random walks under geometric and dynamic constraints

Chupeau, Marie 05 July 2016 (has links)
We first determine the impact of an infinite reflecting wall on the space occupied by a planar Brownian motion at a fixed observation time. We characterize it by the mean perimeter of its convex hull, defined as the minimal convex polygon enclosing the whole trajectory. We also determine the mean length of the visited portion of the wall, and the survival probability of a Brownian walker in an absorbing wedge. We then study the time needed for a lattice random walker to visit every site of a confined volume, or a fraction of them. We calculate the mean value of this so-called cover time in one dimension for a persistent random walk. We also determine the distribution of the cover time and related observables for the class of non-compact processes, which describes a wide range of random searches. After that, we calculate and analyze the splitting probability of a one-dimensional Brownian walker evolving in an expanding or contracting interval. Last, we study several aspects of the starving random walk model, in which the walker starves if its visits to new sites, from which it collects resources, are not regular enough. We develop a mean-field treatment of this model in two dimensions, then determine the impact of resource regeneration on the survival properties of the walker. We finally consider a model of exploitation of food patches that explicitly takes into account the displacement of the walker within the patches, and which can be mapped onto the starving random walk model.
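As a hedged numerical illustration of the first observable (not the analytical approach of the thesis), the C++ sketch below simulates discretized planar Brownian trajectories, builds the convex hull of each one, and averages the hull perimeter. For an unconstrained path of duration T the classical value E[L] = sqrt(8 pi T), roughly 5.01 sqrt(T), can serve as a sanity check. Parameters and names are illustrative.

#include <algorithm>
#include <cmath>
#include <iostream>
#include <random>
#include <vector>

struct Pt { double x, y; };

static double cross(const Pt& a, const Pt& b, const Pt& c) {
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

// Monotone-chain convex hull (counter-clockwise, no repeated endpoint).
static std::vector<Pt> hull(std::vector<Pt> p) {
    std::sort(p.begin(), p.end(), [](const Pt& a, const Pt& b) {
        return a.x < b.x || (a.x == b.x && a.y < b.y); });
    std::vector<Pt> h(2 * p.size());
    std::size_t k = 0;
    for (std::size_t i = 0; i < p.size(); ++i) {
        while (k >= 2 && cross(h[k - 2], h[k - 1], p[i]) <= 0) --k;
        h[k++] = p[i];
    }
    for (std::size_t i = p.size() - 1, t = k + 1; i-- > 0; ) {
        while (k >= t && cross(h[k - 2], h[k - 1], p[i]) <= 0) --k;
        h[k++] = p[i];
    }
    h.resize(k > 1 ? k - 1 : k);
    return h;
}

int main() {
    const double T = 1.0;      // total duration of each trajectory
    const int steps = 100000;  // time discretization
    const int samples = 200;   // independent trajectories
    std::mt19937 rng(42);
    std::normal_distribution<double> gauss(0.0, std::sqrt(T / steps));

    double meanPerimeter = 0.0;
    for (int s = 0; s < samples; ++s) {
        std::vector<Pt> traj(1, {0.0, 0.0});
        for (int i = 0; i < steps; ++i)
            traj.push_back({traj.back().x + gauss(rng), traj.back().y + gauss(rng)});
        const std::vector<Pt> h = hull(traj);
        double L = 0.0;
        for (std::size_t i = 0; i < h.size(); ++i) {
            const Pt& a = h[i];
            const Pt& b = h[(i + 1) % h.size()];
            L += std::hypot(b.x - a.x, b.y - a.y);
        }
        meanPerimeter += L / samples;
    }
    // Expected to approach sqrt(8*pi*T), about 5.01 for T = 1, without a wall.
    std::cout << "mean hull perimeter ~ " << meanPerimeter << "\n";
    return 0;
}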
34

Kvantitativní slabá kompaktnost / Quantitative weak compactness

Rolínek, Michal January 2012 (has links)
In this thesis we study quantitative weak compactness in spaces (C(K), τ_p) and later in Banach spaces. In the first chapter we introduce several quantities, which in different manners measure τ_p-noncompactness of a given uniformly bounded set H ⊂ R^K. We apply the results in Banach spaces in chapter 2, where we prove (among others) a quantitative version of the Eberlein-Smulyan theorem. In the third chapter we focus on convex closures and how they affect measures of noncompactness. We prove a quantitative version of the Krein-Smulyan theorem. The first three chapters show that measuring noncompactness is intimately related to measuring distances from function spaces. We follow this idea in chapters 4 and 5, where we measure distances from Baire-one functions, first in R^K and later also in Banach spaces.
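The abstract does not reproduce the precise quantities; as a hedged illustration of what a "quantitative" version of such a theorem looks like, one frequently cited quantitative Krein-Smulyan inequality (in the spirit of results of Fabian, Hájek, Montesinos, Zizler and of Granero) states that for a bounded set H of a Banach space X, with closures taken in (X**, w*),

\hat{d}\big(\overline{\operatorname{conv}}^{\,w^*}(H),\, X\big) \;\le\; 2\,\hat{d}\big(\overline{H}^{\,w^*},\, X\big), \qquad \hat{d}(A, X) := \sup_{a \in A} \operatorname{dist}(a, X).

Since H is weakly relatively compact exactly when \hat{d}(\overline{H}^{w^*}, X) = 0, the inequality recovers the classical Krein-Smulyan theorem; whether this is the precise quantity used in the thesis is not stated in the abstract.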
35

Anomalous Diffusion in Ecology

Lukovic, Mirko 06 February 2014 (has links)
No description available.
36

Teoremas de semiespaço para superfícies mínimas / Half-space theorems for minimal surfaces

Silva, Sylvia Ferreira da 20 March 2017 (has links)
In this work we detail the results presented by Hoffman and Meeks in "The strong half-space theorem for minimal surfaces". The first results are half-space theorems for minimal surfaces in R^3, which have been generalized to other ambient spaces, as done by Daniel, B. / Hauswirth, L. and by Daniel, B. / Meeks, W. H. III. The third and last result characterizes convex hulls in n-dimensional Euclidean space.
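For context, the two half-space theorems in question, as they are usually stated in the literature (paraphrased here, not quoted from the dissertation), are:

- Half-space theorem (Hoffman-Meeks): a complete, connected, properly immersed minimal surface in R^3 that is contained in a half-space must be a plane.
- Strong half-space theorem: two disjoint, complete, connected, properly immersed minimal surfaces in R^3 must be parallel planes.

The third result detailed in the dissertation is the characterization of convex hulls in n-dimensional Euclidean space mentioned above.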
37

Wavelet-based segmentation and convex hull approaches for quantitative analysis of biological imaging data

Tzu-Ching Wu (7819853) 14 January 2021 (has links)
Imaging-based analysis of developmental processes is crucial to understanding the mechanisms controlling plant and animal development. In vertebrate embryos such as the zebrafish embryo, nuclei segmentation plays an important role in detecting and quantifying nuclei over space and time; however, limitations of image quality and of segmentation methods may affect segmentation performance. In plants, including studies of Arabidopsis epidermis growth, cellular shape change dictates organ size control and growth behavior, and quantitative image analysis of dynamic cell patterning is needed to link cause and effect between cells and organs. Here we provide a series of new quantitative biological imaging methods and tools, including a wavelet-based segmentation method for zebrafish embryo development studies and a convex hull approach for quantitative shape analysis of lobed plant cells.

Identification of individual cells in tissues, organs, and various developing systems is a well-studied problem because it is an essential part of objectively analyzing quantitative images in numerous biological contexts. We present a size-dependent wavelet-based segmentation method that provides robust segmentation without any preprocessing, filtering or fine-tuning steps, and is robust to the signal-to-noise ratio (SNR). The program separates overlapping nuclei, identifies cell cycle states and minimizes intensity attenuation in object identification. The wavelet-based methods presented herein achieve robust segmentation results with respect to true positive rate, precision, and segmentation accuracy compared with other commonly used methods. We applied the segmentation program to in toto quantification of zebrafish embryonic development and developed an automatic interactive image analysis platform named WaveletSEG, which integrates nuclei segmentation, image registration, and nuclei shape analysis. Additional functions we developed, including a 3D ground-truth annotation tool, a synthetic image generator, an export tool for segmented training datasets, and data visualization interfaces, are also incorporated in WaveletSEG for further data analysis and validation.

In addition to our work in zebrafish, we developed image analysis tools for quantitative cell-to-organ studies in plants. Given the importance of the epidermis and of this particular cell type for leaf expansion, there is a strong need to understand how pavement cells morph from a simple polyhedral shape into highly lobed and interdigitated cells. It is still unclear how and when patterns of lobing are initiated in pavement cells, and one major technological bottleneck is the lack of a robust and objective methodology to identify and track lobing events during the transition from simple cell geometry to lobed cells. We develop a convex-hull-based algorithm termed LobeFinder to identify lobes, quantify geometric properties, and create a useful graphical output for further analysis. The algorithm is validated against manually curated images of pavement cells of widely varying sizes and shapes. The ability to objectively count and detect new lobe initiation events provides an improved quantitative framework to analyze mutant phenotypes, detect symmetry-breaking events in time-lapse image data, and quantify the time-dependent correlation between cell shape change and intracellular factors that may play a role in the morphogenesis process.
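A hedged, simplified C++ sketch of the convex-hull idea behind a LobeFinder-style analysis follows: the convex hull of a segmented cell contour is computed, and contour points lying deeper than a chosen threshold inside the hull mark the indentations between lobes, whose tips sit on the hull itself. The published LobeFinder algorithm applies additional criteria that are not reproduced here, and the function names below are illustrative.

#include <algorithm>
#include <cmath>
#include <limits>
#include <vector>

struct Pt { double x, y; };

static double cross(const Pt& a, const Pt& b, const Pt& c) {
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

// Monotone-chain convex hull of the contour points (counter-clockwise).
static std::vector<Pt> convexHull(std::vector<Pt> p) {
    std::sort(p.begin(), p.end(), [](const Pt& a, const Pt& b) {
        return a.x < b.x || (a.x == b.x && a.y < b.y); });
    std::vector<Pt> h(2 * p.size());
    std::size_t k = 0;
    for (std::size_t i = 0; i < p.size(); ++i) {
        while (k >= 2 && cross(h[k - 2], h[k - 1], p[i]) <= 0) --k;
        h[k++] = p[i];
    }
    for (std::size_t i = p.size() - 1, t = k + 1; i-- > 0; ) {
        while (k >= t && cross(h[k - 2], h[k - 1], p[i]) <= 0) --k;
        h[k++] = p[i];
    }
    h.resize(k > 1 ? k - 1 : k);
    return h;
}

// Distance from point p to the segment ab.
static double segDist(const Pt& p, const Pt& a, const Pt& b) {
    const double dx = b.x - a.x, dy = b.y - a.y;
    const double len2 = dx * dx + dy * dy;
    double t = len2 > 0 ? ((p.x - a.x) * dx + (p.y - a.y) * dy) / len2 : 0.0;
    t = std::max(0.0, std::min(1.0, t));
    return std::hypot(p.x - (a.x + t * dx), p.y - (a.y + t * dy));
}

// Flag contour points lying deeper than `depth` inside the hull; the runs of
// flagged points separate candidate lobes, whose tips lie on the hull itself.
std::vector<bool> indentationMask(const std::vector<Pt>& contour, double depth) {
    const std::vector<Pt> h = convexHull(contour);
    std::vector<bool> mask(contour.size(), false);
    for (std::size_t i = 0; i < contour.size(); ++i) {
        double d = std::numeric_limits<double>::max();
        for (std::size_t j = 0; j < h.size(); ++j)
            d = std::min(d, segDist(contour[i], h[j], h[(j + 1) % h.size()]));
        mask[i] = d > depth;
    }
    return mask;
}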
38

Development of virtual reality tools for arthroscopic surgery training

Yaacoub, Fadi 12 November 2008 (has links) (PDF)
Arthroscopic surgery is currently undergoing significant growth, to the benefit of a large number of patients. However, the technique has a number of drawbacks, and surgeons therefore need to train and rehearse their gestures in order to perform this type of operation efficiently and reliably. Traditional surgical teaching methods are based on cadaver dissection and training on animals. As society evolves, both practices are increasingly criticized and are subject to very restrictive regulations. To reach a higher level of proficiency, new means of learning are needed for surgeons. Recently, virtual reality has been used more and more in medicine, and especially in surgery. Surgical simulators have become one of the most active topics in virtual reality research, as well as a valid training method and tool for surgeons and medical students alike. In this work, a virtual reality simulator for teaching arthroscopic surgery, in particular wrist surgery, is presented. Two main issues are addressed: 3D reconstruction and 3D interaction. A sequence of CT images was processed to generate a 3D model of the wrist. The two main components of the system interface are described: 3D interaction for guiding the surgical instruments, and the user interface for force feedback. In this context, the algorithms that model the objects using convex hull approaches and that simulate collision detection between the virtual objects in real time are presented. In addition, a force-feedback device is used as a haptic interface to the system. This leads to a low-cost system with the same advantages as professional devices. In this respect, wrist arthroscopy can be simulated, and medical students can easily use the system and learn the required basic skills safely, flexibly and at lower cost.
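The abstract does not give implementation details of the collision tests; as a hedged illustration of how a precomputed convex hull can support real-time collision queries, the C++ sketch below tests whether an instrument tip lies inside a convex polyhedron stored as an intersection of half-spaces. The structure and names are illustrative assumptions, not the simulator's actual API.

#include <vector>

struct Vec3 { double x, y, z; };

// A convex hull stored as the intersection of half-spaces n . p <= d
// (outward normals). Such a representation can be precomputed once from
// the hull facets of a reconstructed bone or instrument model.
struct HalfSpace { Vec3 n; double d; };

static double dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Point-in-hull test with a safety margin: returns true when the point lies
// inside (or within `margin` of) every bounding plane, i.e. a collision.
bool collides(const std::vector<HalfSpace>& hull, const Vec3& tip, double margin) {
    for (const HalfSpace& h : hull)
        if (dot(h.n, tip) - h.d > margin) return false;  // outside this plane
    return true;
}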
39

Introduction et analyse des schémas de cotation en avance de phase / Introduction and analysis of the tolerancing schemes, during the first design stages.

Socoliuc, Michel 09 July 2010 (has links)
Some time ago, I read that "according to our current standards, we could consider the Roman bridges of antiquity as inefficient: they used too much stone, and an enormous amount of work went into their construction. Over the years, in order to respond to similar problems, we have learned to use fewer materials and to reduce the workload." The same issues are found in mechanical design, where we continuously try to offer ever more efficient systems that must be designed in less time, be cheaper to produce, and provide performance at least equivalent to what has already been designed. During a conventional design process, designers define an ideal, defect-free geometry and then, since the production means cannot deliver such final parts, specify the tolerancing schemes that define the acceptable deviations guaranteeing the correct functioning of the system. Unfortunately, this is done after the detailed drawings have been produced, that is, too late. To address this problem, I will present the integration, very early in the design life cycle, of an optimized validation process based on a Digital Mock-Up directly linked to its functional representation (Functional Mock-Up), which allows standardized 3D tolerancing schemes to be validated. I will first describe what is meant by "Functional Mock-Up" (FMU) and what this new definition adds to the digital definition (DMU). I will then detail the links that ensure the uniqueness of the information within the working environment, as well as the processes linking the functional and digital representations. Next, I will detail the processes based on these concepts, whose purpose is to validate the tolerancing choices made in the early design stages. To do this, I will begin with the worst-case analysis (using, in particular, deviation-domain models), which guarantees the correct functioning of the mechanical assembly when all deviations lie inside their respective zones (defined by the tolerances). Finally, I will introduce what a statistical layer, coupled with the worst-case analysis based on convex hulls, can bring in an industrial context, particularly under time constraints.
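As a hedged, generic illustration of the difference between a worst-case and a statistical treatment of tolerances (a simple linear stack-up in C++, not the deviation-domain or convex-hull machinery developed in the thesis; all names and values below are illustrative):

#include <cmath>
#include <cstdio>
#include <vector>

// One contributor to a functional requirement: a linear sensitivity and a
// symmetric tolerance +/- tol around the nominal dimension.
struct Contributor { double sensitivity; double tol; };

// Worst case: every contributor sits at its unfavourable extreme simultaneously.
double worstCase(const std::vector<Contributor>& c) {
    double sum = 0.0;
    for (const Contributor& k : c) sum += std::fabs(k.sensitivity) * k.tol;
    return sum;
}

// Statistical (root-sum-square): contributors assumed independent and centred,
// giving a tighter, probabilistic bound on the functional deviation.
double rootSumSquare(const std::vector<Contributor>& c) {
    double sum = 0.0;
    for (const Contributor& k : c) sum += std::pow(k.sensitivity * k.tol, 2.0);
    return std::sqrt(sum);
}

int main() {
    std::vector<Contributor> stack = {{1.0, 0.05}, {-1.0, 0.08}, {0.5, 0.10}};
    std::printf("worst case  : +/- %.3f\n", worstCase(stack));
    std::printf("statistical : +/- %.3f\n", rootSumSquare(stack));
    return 0;
}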
40

Processo iterativo de construção da função de custo futuro na metodologia PDE-ConvexHull / Iterative process for constructing the cost-to-go function in the PDE-ConvexHull methodology

Brandi, Rafael Bruno da Silva 30 March 2011 (has links)
The Brazilian National Grid (BNG) has peculiar characteristics due to the country's huge territorial dimensions and the fact that electricity generation is predominantly hydraulic, giving the system the capacity for pluriannual regularization of its reservoirs. The water inflows to these plants are stochastic and often show complementary behavior among the regions of the country, which encourages large energy exchanges between the subsystems through the national grid. The long-term operation planning problem (covering a period of 5 to 10 years ahead with monthly discretization) is solved by a chain of computational models validated by the main agents acting on the BNG (commercialization, generation, transmission and distribution of energy). The primary model of this chain, developed by the Electric Energy Research Center/ELETROBRÁS, is the NEWAVE model, which is based on Stochastic Dual Dynamic Programming (SDDP) for determining the electro-energetic policy over a long-term horizon. This work aims to implement a computational model for mid/long-term operation planning using Stochastic Dynamic Programming (SDP) together with the Convex Hull algorithm (PDE-ConvexHull) in a computationally efficient way (Fast-PDE-ConvexHull). The motivation is that, when the PDE-ConvexHull technique is used, a large number of hyperplanes is obtained in the composition of the cost-to-go function, so the linear programming problems to be solved during the iterative process become larger, considerably increasing the execution time of the operational policy calculation. The main contribution of this work is therefore a new methodology (Fast-PDE-ConvexHull) for representing the cost-to-go function in the linear programming problems, in which the computational time becomes less sensitive to the number of hyperplanes obtained from the Convex Hull algorithm. Parallel computing techniques are also employed to make the process more efficient. The methodology was applied to the BNG's long-term planning, based on equivalent energy subsystems. Fast-PDE-ConvexHull was incorporated into a computational platform, developed in C++/Java, able to use the same data set as the official models acting on the BNG, composing a robust model for solving the problem. First, to validate the proposed methodology, the results obtained with Fast-PDE-ConvexHull are compared with those obtained with the PDE-ConvexHull technique, in order to verify the computational gain and the agreement between the two sets of results. Finally, since the platform can use the official data set available for the national grid, the Monthly Operational Program (MOP) of January 2011, released by the Independent System Operator (ISO), was used as a study case to compare the results obtained by the proposed methodology with those obtained by the NEWAVE model.
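The cost-to-go function described above is built from hyperplanes obtained with the convex hull algorithm. As a hedged C++ sketch of that representation (a generic cut-based form with illustrative names, not the thesis's Fast-PDE-ConvexHull data structures), the future-cost function over the stored-energy state x can be kept as a list of hyperplanes and evaluated as their upper envelope; in the stage linear program the same hyperplanes enter as constraints alpha >= coef . x + rhs on an epigraph variable alpha.

#include <algorithm>
#include <vector>

// One hyperplane (cut) of the cost-to-go function: alpha >= coef . x + rhs,
// where x is the vector of stored energy in the equivalent subsystems.
struct Hyperplane {
    std::vector<double> coef;
    double rhs;
};

// Piecewise-linear convex future-cost function: the upper envelope of its cuts.
class CostToGo {
public:
    void addCut(const Hyperplane& h) { cuts_.push_back(h); }

    // Evaluate the function at a given state (max over all hyperplanes).
    double evaluate(const std::vector<double>& x) const {
        double best = 0.0;  // value when no cut exists; also a non-negativity floor
        for (const Hyperplane& h : cuts_) {
            double v = h.rhs;
            for (std::size_t i = 0; i < x.size(); ++i) v += h.coef[i] * x[i];
            best = std::max(best, v);
        }
        return best;
    }

    // The cuts to be appended to the stage LP as: alpha >= coef . x + rhs.
    const std::vector<Hyperplane>& cuts() const { return cuts_; }

private:
    std::vector<Hyperplane> cuts_;
};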
