11

Optimal steering for kinematic vehicles with applications to spatially distributed agents

Bakolas, Efstathios 10 November 2011
The recent technological advances in the field of autonomous vehicles have given researchers a growing impetus to improve the current framework of mission planning and execution in both military and civilian contexts. Many recent efforts in this direction emphasize the importance of replacing the so-called monolithic paradigm, where a mission is planned, monitored, and controlled by a unique global decision maker, with a network-centric paradigm, where the same mission-related tasks are performed by networks of interacting decision makers (autonomous vehicles). Interest in applications involving teams of autonomous vehicles is expected to grow significantly in the near future, as new paradigms for their use are constantly being proposed for a diverse spectrum of real-world applications. One promising approach to extending available techniques for problems involving a single autonomous vehicle to those involving teams of autonomous vehicles is to use the Voronoi diagram as a means of reducing the complexity of the multi-vehicle problem. In particular, the Voronoi diagram provides a spatial partition of the environment the team of vehicles operates in, where each element of this partition is associated with a unique vehicle from the team. The partition induces, in turn, a graph abstraction of the operating space that is in one-to-one correspondence with the network abstraction of the team of autonomous vehicles, a fact that can provide both conceptual and analytical advantages during mission planning and execution. In this dissertation, we propose the use of a new class of Voronoi-like partitioning schemes with respect to state-dependent proximity (pseudo-)metrics rather than the Euclidean distance or the other generalized distance functions typically used in the literature.
An important nuance here is that, in contrast to the Euclidean distance, state-dependent metrics can succinctly capture system theoretic features of each vehicle from the team (e.g., vehicle kinematics), as well as the environment-vehicle interactions, which are induced, for example, by local winds/currents. We subsequently illustrate how the proposed concept of state-dependent Voronoi-like partition can induce local control schemes for problems involving networks of spatially distributed autonomous vehicles by examining different application scenarios.
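The effect of swapping the Euclidean distance for a state-dependent proximity measure can be illustrated with a toy example. The sketch below uses a simple travel-time pseudo-metric with per-vehicle speeds (an illustrative assumption, not the dissertation's actual metrics) and compares the resulting brute-force partition against the ordinary Euclidean Voronoi partition:

```python
import numpy as np

def voronoi_like_partition(grid, n_gen, cost):
    # Assign each grid point to the generator of minimum (pseudo-)metric cost.
    costs = np.array([[cost(p, i) for i in range(n_gen)] for p in grid])
    return costs.argmin(axis=1)

gens = np.array([[0.2, 0.5], [0.8, 0.5], [0.5, 0.9]])
speeds = np.array([1.0, 1.0, 2.0])   # vehicle 2 is twice as fast

xs, ys = np.meshgrid(np.linspace(0, 1, 40), np.linspace(0, 1, 40))
grid = np.column_stack([xs.ravel(), ys.ravel()])

euclidean = lambda p, i: np.linalg.norm(p - gens[i])
travel_time = lambda p, i: np.linalg.norm(p - gens[i]) / speeds[i]

cells_euc = voronoi_like_partition(grid, 3, euclidean)
cells_time = voronoi_like_partition(grid, 3, travel_time)

# The faster vehicle's travel-time cell contains, and exceeds, its Euclidean cell.
print((cells_time == 2).sum() > (cells_euc == 2).sum())   # True
```

Any point that vehicle 2 reaches first under the Euclidean metric it also reaches first under the time metric, so its cell can only grow; the partition shape is no longer polygonal, which is exactly why such metrics can encode vehicle capabilities.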
12

Mimetic finite differences for porous media applications

Al-Hinai, Omar A. 07 July 2014
We connect the Mimetic Finite Difference method (MFD) with the finite-volume two-point flux approximation scheme (TPFA) for Voronoi meshes. The main effect is reducing the saddle-point system to a much smaller symmetric positive-definite matrix. In addition, the generalization allows MFD to integrate seamlessly with existing porous media modeling technology, and it imparts the monotonicity property of the TPFA method on MFD. The connection is achieved by altering the consistency condition of the velocity bilinear operator. First-order convergence theory is presented, along with numerical results that support the claims. We demonstrate a methodology for using MFD to model fluid flow in fractures coupled with a reservoir; the method can be used for nonplanar fractures. We use it to demonstrate the effects of fracture curvature on single-phase and multi-phase flows. Standard benchmarks demonstrate the accuracy of the method, and the approach is coupled with existing reservoir simulation technology.
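The TPFA building block that the MFD system is reduced to can be sketched in one dimension. The following minimal example (a generic textbook TPFA with harmonic-average transmissibilities, not the thesis code) assembles the symmetric positive-definite system and verifies it on a homogeneous problem whose cell-centered solution is exactly linear:

```python
import numpy as np

# Cell-centered TPFA on a 1-D mesh: the flux between neighbors i, j is
# T_ij (u_i - u_j), with the transmissibility taken as the harmonic
# average of the cell permeabilities over the two half-cell distances.
n = 10
h = 1.0 / n
k = np.ones(n)                 # permeability per cell (homogeneous here)
x = (np.arange(n) + 0.5) * h   # cell centers

A = np.zeros((n, n))
b = np.zeros(n)
for i in range(n - 1):         # interior faces
    T = 2.0 / (h / k[i] + h / k[i + 1])   # harmonic-average transmissibility
    A[i, i] += T; A[i + 1, i + 1] += T
    A[i, i + 1] -= T; A[i + 1, i] -= T

# Dirichlet boundaries u(0) = 1, u(1) = 0 via half-cell transmissibilities
Tb = 2.0 * k[0] / h
A[0, 0] += Tb; b[0] += Tb * 1.0
A[-1, -1] += 2.0 * k[-1] / h

u = np.linalg.solve(A, b)
# The system is SPD and reproduces the exact profile u(x) = 1 - x at cell centers.
print(np.allclose(u, 1.0 - x))   # True
```

The same assembly pattern generalizes to Voronoi cells in 2D/3D, with the face transmissibility computed from the cell-center-to-face distances.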
13

Análise dos erros na estimação de gradientes em malhas de Voronoi / Analysis of errors in gradient estimation on Voronoi meshes

Jailson França dos Santos 18 March 2013
This work presents a theoretical and numerical study of the errors that occur in gradient computations on unstructured meshes of Voronoi type, meshes that are also formed by the Delaunay triangulation. The meshes adopted in this work were Cartesian and triangular, the latter generated by dividing a square into two or four equal triangles. For this analysis, three distinct methodologies for computing gradients were chosen: the Green-Gauss method, the weighted least-squares method, and the corrected projected-gradient average method. The text rests on two main points: showing that the error equations given by the gradients can be similar but with opposite signs for evaluation points in neighboring volumes, and showing that the error order of the analytical equations can be improved on uniform meshes compared with non-uniform ones in the one-dimensional cases, and when analyzed at the face of such neighboring volumes in the two-dimensional cases.
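Of the three gradient methods, Green-Gauss is the easiest to sketch. The snippet below (a generic single-cell version on a Cartesian control volume, not the thesis implementation) confirms the well-known sanity check that the reconstruction is exact for linear fields:

```python
import numpy as np

# Green-Gauss gradient on one Cartesian cell [0,h] x [0,h]:
# grad f ≈ (1/V) * sum over faces of f(face midpoint) * n * |face|.
def green_gauss_cell(f, h):
    V = h * h
    faces = [  # (midpoint, outward normal, face area)
        (np.array([h / 2, 0.0]), np.array([0.0, -1.0]), h),
        (np.array([h, h / 2]),   np.array([1.0, 0.0]),  h),
        (np.array([h / 2, h]),   np.array([0.0, 1.0]),  h),
        (np.array([0.0, h / 2]), np.array([-1.0, 0.0]), h),
    ]
    g = np.zeros(2)
    for mid, nrm, area in faces:
        g += f(mid) * nrm * area
    return g / V

# For a linear field the reconstruction is exact, independent of h.
f = lambda p: 2.0 * p[0] + 3.0 * p[1] + 1.0
print(green_gauss_cell(f, 0.1))   # ≈ [2. 3.]
```

The error analysis in the thesis concerns what happens when the face values are interpolated from neighboring cell centers on non-uniform Voronoi meshes, where this exactness is lost.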
14

Fluid distribution optimization in porous media using leaf venation patterns / Otimização da distribuição de fluidos em meios porosos usando padrões de venações de folhas

Caio Martins Ramos de Oliveira 22 March 2017
Several examples of nearly optimal transport networks can be found in nature. These networks effectively distribute and drain fluids throughout a medium. Evidence suggests that the blood vessels of the circulatory system, the airways in the lungs and the veins of leaf venations are examples of networks that have evolved to become effective at their tasks while simultaneously being energy efficient. Hence, it does not come as a surprise that recent performance improvements in modern power-generating devices stem from the use of nature-inspired channel architectures. Guided by these observations, in this work we investigate the application of visually realistic computer-generated leaf venation patterns to a type of photovoltaic device. We solve the flow-through-the-device problem using Computational Fluid Dynamics (CFD) tools. Moreover, we attempt to develop experimental models. Ultimately, we seek to single out the network properties that affect their performance.
15

[pt] OTIMIZAÇÃO TOPOLÓGICA COM REFINAMENTO ADAPTATIVO DE MALHAS POLIGONAIS / [en] TOPOLOGY OPTIMIZATION WITH ADAPTIVE POLYGONAL MESH REFINEMENT

THOMÁS YOITI SASAKI HOSHINA 03 November 2016
[en] Topology optimization aims to find the most efficient distribution of material (the optimal topology) in a given region while satisfying the design constraints established by the user. In the traditional approach, a constant design variable called density is assigned to each finite element of the mesh, so the quality of the representation of the structure's new boundaries depends on the level of mesh discretization: the greater the number of elements, the better defined the topology of the optimized structure will be. However, the use of super-refined meshes entails a high computational cost, especially in the numerical solution of the equilibrium equations by the finite element method. This work proposes a new computational strategy for local adaptive mesh refinement using polygonal finite elements in arbitrary two-dimensional domains. The idea is to refine the mesh in regions of material concentration, above all along the inner and outer boundaries, and to derefine it in regions of low material concentration, such as internal holes. In this way it is possible to obtain optimal topologies with high resolution at relatively low computational cost. Representative examples demonstrate the robustness and efficiency of the proposed methodology through comparisons with results obtained on super-refined meshes kept constant throughout the entire topology optimization process.
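The refine/derefine criterion can be caricatured on a density grid. The thesis works with polygonal elements; this Cartesian sketch, with a hypothetical `mark_elements` helper, only illustrates the boundary-versus-hole logic:

```python
import numpy as np

# Mark an element for refinement when its neighbourhood mixes solid and
# void (a material boundary), and for derefinement when it sits in a
# uniformly void region, e.g. an internal hole.
def mark_elements(rho, solid=0.5):
    refine = np.zeros_like(rho, dtype=bool)
    derefine = np.zeros_like(rho, dtype=bool)
    n, m = rho.shape
    for i in range(n):
        for j in range(m):
            patch = rho[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2]
            if patch.max() >= solid and patch.min() < solid:
                refine[i, j] = True      # solid/void boundary
            elif patch.max() < solid:
                derefine[i, j] = True    # uniform void
    return refine, derefine

rho = np.zeros((8, 8)); rho[2:6, 2:6] = 1.0   # a solid block in a void domain
refine, derefine = mark_elements(rho)
print(refine[2, 2], derefine[0, 0], refine[4, 4])   # True True False
```

Interior solid elements are neither refined nor derefined, which is the behavior the adaptive strategy relies on to keep the element count low away from the boundaries.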
16

Um método para análise e visualização de dados georreferenciados relacionados ao trânsito de veículos / A method for the analysis and visualization of georeferenced vehicle traffic data

Machado, Jonathan 30 March 2017
Traffic accidents are one of the biggest causes of death among the world's young population, and the number of cases tends to grow in the coming years. The occurrence of accidents is influenced by several factors, such as road conditions, climatic conditions and law enforcement by government agencies, among others. It would be useful to know in more detail which of these factors have the greatest influence. On the internet there is an immense amount of data generated by diverse agencies and companies, but much of this information is never analyzed, either because of lack of access or because the data are not structured in a way that allows their understanding. The availability of data keeps increasing, whether through open-data policies implemented by governments or through collaborative web tools that let the population record information and later make the data available. This work proposes a method for grouping georeferenced data from several sources in order to perform a statistical analysis using Principal Component Analysis, which can identify, in a georeferenced way, which characteristics most influence the occurrence of vehicle traffic accidents. After the analysis, a new methodology for visualizing the results, plotted on maps, is explored; it can aid government agencies and decision makers who take actions to reduce traffic accidents.
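The PCA step can be sketched with a plain SVD. The feature names below are hypothetical stand-ins, not the actual data sources used in the work:

```python
import numpy as np

# Rows are (hypothetical) road segments, columns are accident-related
# factors; two of the factors are deliberately correlated.
rng = np.random.default_rng(0)
n = 200
rain = rng.normal(size=n)
visibility = -0.9 * rain + 0.1 * rng.normal(size=n)   # correlated with rain
enforcement = rng.normal(size=n)                       # independent factor
X = np.column_stack([rain, visibility, enforcement])

Xc = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize each factor
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / (s**2).sum()             # explained variance ratios

# The correlated pair collapses onto one component, so PC1 dominates.
print(explained[0] > 0.5)   # True
scores = Xc @ Vt.T          # per-segment coordinates, ready to plot on a map
```

In the proposed method the component scores are what gets plotted back onto the map, so each location can be colored by how strongly the dominant factor combination applies there.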
17

K-set Polygons and Centroid Triangulations / K-set Polygones et Triangulations Centroïdes

El Oraiby, Wael 09 October 2009
This thesis is a contribution to a classical problem in computational and combinatorial geometry: the study of the k-sets of a set V of n points in the plane. First, we introduce the notion of a convex inclusion chain, an ordering of the points of V such that no point lies inside the convex hull of the points that precede it. Every k-set of an initial subsequence of the chain is called a k-set of the chain. We prove that the number of these k-sets is an invariant of V and is equal to the number of regions of the order-k Voronoi diagram of V. We then deduce an online algorithm for constructing the k-sets of the vertices of a simple polygonal line such that every vertex of the line is outside the convex hull of all its preceding vertices. If c is the total number of k-sets built by this algorithm, its complexity is O(n log n + c log^2 k) and is equal, per constructed k-set, to the complexity of the best known algorithm. Afterward, we prove that the classical divide-and-conquer algorithmic method can be adapted to the construction of the k-sets of V; the resulting algorithm has complexity O(n log n + c log^2 k log(n/k)), where c is the maximum number of k-sets of a set of n points. We finally prove that the centers of gravity of the k-sets of a convex inclusion chain are the vertices of a triangulation belonging to the family of so-called centroid triangulations, which notably contains the dual of the order-k Voronoi diagram. We give an algorithm that builds particular centroid triangulations in O(n log n + k(n-k) log^2 k) time, which is more efficient than all previously known algorithms.
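The order-k Voronoi diagram partitions the plane by "which k sites are nearest", and the thesis relates the number of its regions to the number of k-sets. That correspondence can be probed with a brute-force sketch (grid sampling, nothing like the thesis's efficient algorithms):

```python
import numpy as np

# Sample a fine grid and collect the distinct k-nearest-site subsets;
# each distinct subset approximates one order-k Voronoi region.
def order_k_regions(sites, k, res=200):
    xs = np.linspace(-0.2, 1.2, res)
    pts = np.array([[x, y] for x in xs for y in xs])
    d = np.linalg.norm(pts[:, None, :] - sites[None, :, :], axis=2)
    idx = np.argsort(d, axis=1)[:, :k]        # indices of the k nearest sites
    return {frozenset(row) for row in idx}

sites = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
print(len(order_k_regions(sites, 1)))   # 4: the ordinary Voronoi cells
```

For k = 1 this recovers the ordinary Voronoi cells; for larger k the count of distinct subsets approximates the number of order-k regions that the invariance result counts exactly.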
18

Αλγοριθμικές τεχνικές εντοπισμού και παρακολούθησης πολλαπλών πηγών από ασύρματα δίκτυα αισθητήρων / Algorithmic techniques for localization and tracking of multiple sources with wireless sensor networks

Αμπελιώτης, Δημήτριος 12 April 2010
Technology advances in microelectronics and wireless communications have enabled the development of small-scale devices that integrate sensing, processing and short-range radio capabilities. The deployment of a large number of such devices, referred to as sensor nodes, over a territory of interest defines the so-called wireless sensor network. Wireless sensor networks have attracted considerable attention in recent years and have motivated many new challenges, most of which require the synergy of many disciplines, including signal processing, networking and distributed algorithms. Among many other applications, source localization and tracking has been widely viewed as a canonical problem of wireless sensor networks. Furthermore, it constitutes an easily perceived problem that can be used as a vehicle to study more involved information processing and organization problems. Most of the source localization methods that have appeared in the literature can be classified into two broad categories, according to the physical variable they utilize: algorithms of the first category use time difference of arrival (TDOA) measurements, while algorithms of the second category use direction of arrival (DOA) measurements. DOA estimates are particularly useful for locating sources emitting narrowband signals, while TDOA measurements offer the increased capability of localizing sources emitting broadband signals. However, the methods of both categories impose two major requirements that render them inappropriate for wireless sensor networks: (a) the analog signals at the outputs of the spatially distributed sensors should be sampled in a synchronized fashion, and (b) the sampling rate should be high enough to capture the features of interest. These requirements imply, in turn, that accurate distributed synchronization methods must keep the remote sensor nodes synchronized, and that high-frequency electronics as well as increased bandwidth are needed to transmit the acquired measurements. Due to these limitations, source localization methods that rely upon received signal strength (RSS) measurements, originally explored for locating electromagnetic sources, have recently received revived attention. In this Thesis, we begin by considering the localization of an isotropic acoustic source using energy measurements from distributed sensors, in the case where the energy decays according to an inverse square law with respect to the distance. While most acoustic source localization algorithms require distance estimates between the sensors and the source of interest, we propose a linear least squares criterion that does not make such an assumption. The new criterion yields the location of the source and its transmit power in closed form. A weighted least squares cost function is also considered, and distributed implementation of the proposed estimators via the Least Mean Square (LMS) and Recursive Least Squares (RLS) adaptive algorithms is studied. Numerical results indicate significant performance improvement compared to a linear least squares approach that utilizes energy ratios, and comparable performance to other estimators of higher computational complexity. We then turn to the case where the energy decay model is not known and is assumed only to be a strictly decreasing function of distance. For solving the localization problem in this case, we first assume that the locations of the nodes near the source can be well described by a uniform distribution. Using this assumption, we derive distance estimates that are independent of both the energy decay model and the transmit power of the source; numerical results show that these estimates improve localization accuracy compared to other model-independent approaches. In the sequel, we consider the more general case where the assumption of uniform sensor deployment is not required. For this case, an optimization problem that does not require knowledge of the underlying energy decay model is proposed, and a condition under which the optimal solution can be computed is given. This condition employs a new geometric construct, called the sorted order-K Voronoi cell. We give centralized algorithms for computing this solution, as well as distributed algorithms based on projections onto convex sets, and we derive bounds on the area of these cells when the sensor locations are uniformly distributed in the plane. The final problem we consider is the estimation of the locations of multiple acoustic sources, with a known energy decay model, by a network of distributed energy-measuring sensors.
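The linear least squares idea for the inverse-square energy model can be sketched directly: multiplying out y_i = P / ||x - s_i||^2 gives y_i (z - 2 s_i·x + ||s_i||^2) = P with z = ||x||^2, which is linear in (x, z, P). The snippet below uses made-up sensor positions and noiseless readings, and is a sketch of the linearization rather than the thesis's exact estimator:

```python
import numpy as np

sensors = np.array([[0., 0.], [4., 0.], [0., 4.], [4., 4.], [2., 1.], [1., 3.]])
src, P = np.array([2.5, 1.5]), 10.0
y = P / np.sum((sensors - src) ** 2, axis=1)   # noiseless energy readings

# Each reading gives one linear equation in the unknowns (x, z, P):
#   [-2 y_i s_i, y_i, -1] · [x1, x2, z, P]^T = -y_i ||s_i||^2
A = np.column_stack([-2 * y[:, None] * sensors, y, -np.ones(len(y))])
b = -y * np.sum(sensors ** 2, axis=1)
sol, *_ = np.linalg.lstsq(A, b, rcond=None)

x_hat, P_hat = sol[:2], sol[3]
print(np.round(x_hat, 6), round(P_hat, 6))   # ≈ [2.5 1.5] 10.0
```

With noise-free readings and generic sensor positions the overdetermined system has the true (x, z, P) as its unique least-squares solution, which is why both the location and the transmit power come out in closed form.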
19

Development of advanced methods for super-resolution microscopy data analysis and segmentation / Développement de méthodes avancées pour l'analyse et la segmentation de données de microscopie à super-résolution

Andronov, Leonid 09 January 2018
Among super-resolution methods, single-molecule localization microscopy (SMLM) is remarkable not only for the best practically achievable resolution but also for direct access to the properties of individual molecules. The primary data of SMLM are the coordinates of individual fluorophores, a relatively rare data type in fluorescence microscopy; specially adapted methods for processing these data therefore have to be developed. I developed the software SharpViSu and ClusterViSu, which cover the most important data processing steps, namely correction of drift and chromatic aberrations, selection of localization events, reconstruction of data into 2D images or 3D volumes using different visualization techniques, estimation of resolution with Fourier ring correlation, and segmentation using Ripley's K and L functions. Additionally, I developed a method for segmentation of 2D and 3D localization data based on Voronoi diagrams, which allows automatic and unambiguous cluster analysis thanks to noise modeling with Monte-Carlo simulations. Using these advanced data processing methods, I demonstrated clustering of CENP-A in the centromeric regions of the cell nucleus and structural transitions of these clusters upon CENP-A deposition in the early G1 phase of the cell cycle.
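The Voronoi/Monte-Carlo segmentation idea can be imitated with a numpy-only sketch: nearest-neighbour distances stand in for Voronoi cell areas (both are inverse proxies for local density), and the cluster threshold is calibrated on simulated uniform noise. This illustrates the principle only and is not the ClusterViSu algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)

def mean_knn_dist(pts, k=4):
    # Mean distance to the k nearest neighbours, a cheap density proxy.
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    d.sort(axis=1)
    return d[:, 1:k + 1].mean(axis=1)   # column 0 is the self-distance

# 50 clustered "localizations" plus 50 uniform background points
cluster = 0.5 + 0.03 * rng.standard_normal((50, 2))
noise = rng.random((50, 2))
pts = np.vstack([cluster, noise])

# Monte-Carlo calibration: the 5th percentile of the statistic under
# pure uniform noise becomes the "denser than noise" threshold.
mc = [mean_knn_dist(rng.random((100, 2))) for _ in range(200)]
threshold = np.percentile(np.concatenate(mc), 5)

flagged = mean_knn_dist(pts) < threshold
print(flagged[:50].mean() > 0.9, flagged[50:].mean() < 0.5)   # True True
```

In the actual method the per-point statistic is the Voronoi cell area (or volume in 3D), but the calibration logic, simulating the noise-only case to set an objective threshold, is the same.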
20

[en] TOPOLOGY OPTIMIZATION USING POLYHEDRAL MESHES

22 February 2019 (has links)
[en] Topology optimization has had an impact in various fields and has the potential to revolutionize several areas of engineering. The method can be implemented on top of the finite element method, and there are several approaches to choose from. In an element-based approach, every finite element is a potential void or actual material, and every element in the domain is assigned a constant design variable, namely a density. In an Eulerian setting, the obtained topology consists of a subset of the initial elements. This approach, however, is subject to numerical instabilities such as one-node connections and rapid oscillations between solid and void material (the so-called checkerboard pattern). Undesirable designs may be obtained when standard low-order elements are used and no further regularization and/or restriction methods are employed. Unstructured polyhedral meshes naturally address these issues and offer flexibility in discretizing non-Cartesian domains. 
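A common regularization against the checkerboard instability mentioned above is a linear density filter, which replaces each element density by a distance-weighted average over a neighborhood. The sketch below is a minimal 2D version on a regular grid, assuming periodic boundaries for brevity; it illustrates the standard technique, not this dissertation's implementation.

```python
import numpy as np

def density_filter(rho, radius):
    """Replace each element density by a cone-weighted average of the
    densities within `radius` (in element units). Uses np.roll, so the
    grid is treated as periodic -- an assumption made for brevity."""
    r = int(np.ceil(radius))
    filtered = np.zeros_like(rho)
    weight_sum = np.zeros_like(rho)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            w = radius - np.hypot(dx, dy)  # linear "hat" weight
            if w <= 0:
                continue  # neighbor outside the filter radius
            filtered += w * np.roll(np.roll(rho, dy, axis=0), dx, axis=1)
            weight_sum += w
    return filtered / weight_sum

# A perfect checkerboard is the worst case: filtering flattens it
# toward the mean density while preserving the material volume.
checker = (np.indices((8, 8)).sum(axis=0) % 2).astype(float)
smooth = density_filter(checker, radius=2.0)
```

Applied to a pure checkerboard, the filter leaves the mean density (volume fraction) unchanged but removes the solid/void oscillation, which is exactly why such filters suppress checkerboard designs.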
In this work we investigate topology optimization on polyhedral meshes through a mesh staggering approach. First, polyhedral meshes are generated based on the concept of centroidal Voronoi diagrams and further optimized for finite element computations. We show that the condition number of the associated system of equations can be improved by minimizing an energy function related to the element geometry. Depending on mesh quality and problem size, different solvers perform differently, so both direct and iterative solvers are addressed. Second, polyhedra are decomposed into tetrahedra by a tailored embedding algorithm. The polyhedral discretization carries the design variables, and a tetrahedral subdiscretization is nested within the polyhedra for finite element analysis. The modular framework decouples analysis and optimization routines and variables, which is promising both for software enhancement and for achieving high-fidelity solutions. Fields such as displacements and design variables are linked through a mapping. The proposed mapping-based framework provides a general approach to solving three-dimensional topology optimization problems using polyhedra, with the potential to be explored in applications beyond the scope of the present work. Finally, the capabilities of the framework are evaluated through several examples, which demonstrate the features and potential of the proposed approach.
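The centroidal Voronoi concept underlying the mesh generation above can be sketched with Lloyd's algorithm: alternately assign sample points to their nearest seed and move each seed to the centroid of its assigned samples. This is a minimal 2D Monte-Carlo illustration, not the dissertation's optimized 3D mesher; the function names are hypothetical.

```python
import numpy as np
from scipy.spatial import cKDTree

def lloyd_step(seeds, samples):
    """One Lloyd iteration: assign samples to their nearest seed, then
    move each seed to the centroid of its assigned samples."""
    _, owner = cKDTree(seeds).query(samples)
    new = seeds.copy()
    for i in range(len(seeds)):
        cell = samples[owner == i]
        if len(cell):
            new[i] = cell.mean(axis=0)
    return new

def quantization_energy(seeds, samples):
    """Mean squared distance from samples to their nearest seed;
    Lloyd's algorithm never increases this quantity."""
    d, _ = cKDTree(seeds).query(samples)
    return float((d ** 2).mean())

# Relax 20 random seeds in the unit square toward a centroidal
# Voronoi configuration with uniform-area, well-shaped cells.
rng = np.random.default_rng(0)
seeds = rng.uniform(0.0, 1.0, size=(20, 2))
samples = rng.uniform(0.0, 1.0, size=(20000, 2))
e0 = quantization_energy(seeds, samples)
for _ in range(30):
    seeds = lloyd_step(seeds, samples)
e1 = quantization_energy(seeds, samples)
```

Each iteration monotonically decreases the quantization energy, spreading the seeds so that the induced Voronoi cells become compact and uniform, which is the property that makes centroidal Voronoi diagrams attractive as finite element meshes.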
