91

Signal reconstruction from incomplete and misplaced measurements

Sastry, Challa, Hennenfent, Gilles, Herrmann, Felix J. January 2007
Constrained by practical and economic considerations, one often works with seismic data that has missing traces, which leads to image artifacts and poor spatial resolution. Moreover, due to practical limitations, measurements may only be available on a perturbed grid rather than on the designated grid; because of algorithmic requirements, treating such measurements as if they lay on the designated grid can introduce additional artifacts during recovery. This paper interpolates incomplete data onto a regular grid via the Fourier domain, using a recently developed greedy algorithm. The basic objective is to study experimentally how large a perturbation of the measurement coordinates can be tolerated while still treating the measurements on the perturbed grid as if they were on the designated grid and achieving faithful recovery. Our experiments show that for compressible signals, a uniformly distributed perturbation can be offset by a slightly larger number of measurements.
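To make the greedy Fourier-domain interpolation concrete, here is a minimal orthogonal-matching-pursuit-style sketch in Python: it recovers a signal with k dominant Fourier coefficients from samples observed at a subset of grid positions and returns it on the full regular grid. The function name and the sparsity parameter k are illustrative; this is the generic greedy recovery idea, not the authors' exact algorithm.

```python
import numpy as np

def greedy_fourier_recovery(y, sample_idx, n, k):
    # Orthonormal inverse-DFT dictionary: column j is the j-th Fourier atom.
    F = np.fft.ifft(np.eye(n), axis=0) * np.sqrt(n)
    A = F[sample_idx, :]                 # rows restricted to observed samples
    yc = np.asarray(y, dtype=complex)
    residual, support = yc, []
    coef = np.zeros(0, dtype=complex)
    for _ in range(k):
        # Greedy step: pick the atom most correlated with the residual.
        j = int(np.argmax(np.abs(A.conj().T @ residual)))
        if j not in support:
            support.append(j)
        # Refit all selected coefficients jointly by least squares.
        coef, *_ = np.linalg.lstsq(A[:, support], yc, rcond=None)
        residual = yc - A[:, support] @ coef
    x_hat = np.zeros(n, dtype=complex)
    x_hat[support] = coef
    return (F @ x_hat).real              # interpolated signal on the full grid
```

Measurements on a perturbed grid would, strictly speaking, call for atoms evaluated at the perturbed coordinates; the experiments described above probe how much perturbation can be ignored before this shortcut degrades recovery.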
92

3-D Scene Reconstruction from Multiple Photometric Images

Forne, Christopher Jes January 2007
This thesis deals with the problem of three-dimensional scene reconstruction from multiple camera images. This is a well-established problem in computer vision and has been researched extensively. In recent years some excellent results have been achieved; however, existing algorithms often fall short of many biological systems in terms of robustness and generality. The aim of this research was to develop improved algorithms for reconstructing 3D scenes, with a focus on accurate system modelling and correctly dealing with occlusions. In scene reconstruction, the objective is to infer scene parameters describing the 3D structure of the scene from the data given by camera images. This is an ill-posed inverse problem, where an exact solution cannot be guaranteed. The use of a statistical approach to deal with the scene reconstruction problem is introduced, and the differences between the maximum a posteriori (MAP) and minimum mean-square error (MMSE) estimates are considered. It is discussed how traditional stereo matching can be performed using a volumetric scene model. An improved model describing the relationship between the camera data and a discrete model of the scene is presented. This highlights some of the common causes of modelling errors, enabling them to be dealt with objectively. The problems posed by occlusions are considered. Using a greedy algorithm, the scene is progressively reconstructed to account for visibility interactions between regions, and the idea of a complete scene estimate is established. Some simple and improved techniques for reliably assigning opaque voxels are developed, making use of prior information. Problems with variations in the imaging convolution kernel between images motivate the development of a pixel dissimilarity measure. Belief propagation is then applied to better utilise prior information and obtain an improved global optimum. A new volumetric factor graph model is presented which represents the joint probability distribution of the scene and imaging system. By utilising the structure of the local compatibility functions, an efficient procedure for updating the messages is detailed. To help convergence, a novel approach of accentuating beliefs is shown. Results demonstrate the validity of this approach; however, the reconstruction error is similar to, or slightly higher than, that of the greedy algorithm. To simplify the volumetric model, a new approach to belief propagation is demonstrated by applying it to a dynamic model. This approach is developed as an alternative to the full volumetric model because it is less memory- and computation-intensive. Using a factor graph, a volumetric known-visibility model is presented which ensures the scene is complete with respect to all the camera images. Dynamic updating is also applied to a simpler single depth-map model. Results show this approach is unsuitable for the volumetric known-visibility model; however, improved results are obtained with the simple depth-map model.
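Since the message-update step is central to the belief-propagation part of the thesis, a generic min-sum update for a pairwise model over discrete labels (depth hypotheses, say) may help fix ideas. This is a minimal sketch under assumed data structures, not the thesis' volumetric factor graph.

```python
import numpy as np

def bp_update(msgs, unary, pairwise, edges):
    """One synchronous round of min-sum belief propagation. msgs[(u, v)]
    is the message vector from u to v, unary[v] the data-cost vector at
    v, pairwise the label-compatibility cost matrix; edges lists every
    directed pair."""
    new = {}
    for (u, v) in edges:
        # Combine u's data cost with all incoming messages except v's.
        h = unary[u] + sum(msgs[(w, x)] for (w, x) in edges if x == u and w != v)
        # For each label of v, minimize over u's labels.
        m = np.min(h[:, None] + pairwise, axis=0)
        new[(u, v)] = m - m.min()        # normalize for numerical stability
    return new

def belief(msgs, unary, edges, v):
    # Belief at v: data cost plus all incoming messages (lower = better).
    return unary[v] + sum(msgs[(w, x)] for (w, x) in edges if x == v)
```

"Accentuating" beliefs, as described above, would then amount to reweighting these belief vectors between rounds; that detail is not reproduced here.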
93

Greedy Representative Selection for Unsupervised Data Analysis

Helwa, Ahmed Khairy Farahat January 2012
In recent years, the advance of information and communication technologies has allowed the storage and transfer of massive amounts of data. The availability of this overwhelming amount of data stimulates a growing need to develop fast and accurate algorithms to discover useful information hidden in the data. This need is even more acute for unsupervised data, which lacks information about the categories of different instances. This dissertation addresses a crucial problem in unsupervised data analysis, which is the selection of representative instances and/or features from the data. This problem can be generally defined as the selection of the most representative columns of a data matrix, formally known as the Column Subset Selection (CSS) problem. Algorithms for column subset selection can be used directly for data analysis or as a pre-processing step to enhance other data mining algorithms, such as clustering. The contributions of this dissertation can be summarized as follows. First, a fast and accurate algorithm is proposed to greedily select a subset of columns of a data matrix such that the reconstruction error of the matrix based on the selected columns is minimized. The algorithm is based on a novel recursive formula for calculating the reconstruction error, which allows the development of time- and memory-efficient algorithms for greedy column subset selection. Experiments on real data sets demonstrate the effectiveness and efficiency of the proposed algorithms in comparison to state-of-the-art methods for column subset selection. Second, a kernel-based algorithm is presented for column subset selection. The algorithm greedily selects representative columns using information about their pairwise similarities. The algorithm can also calculate a Nyström approximation for a large kernel matrix based on the subset of selected columns. In comparison to different Nyström methods, the greedy Nyström method has been empirically shown to achieve significant improvements in approximating kernel matrices, with minimal run-time overhead. Third, two algorithms are proposed for fast approximate k-means and spectral clustering. These algorithms employ the greedy column subset selection method to embed all data points in the subspace of a few representative points, where the clustering is performed. The approximate algorithms run much faster than their exact counterparts while achieving comparable clustering performance. Fourth, a fast and accurate greedy algorithm for unsupervised feature selection is proposed. The algorithm is an application of the greedy column subset selection method presented in this dissertation. Similarly, the features are greedily selected such that the reconstruction error of the data matrix is minimized. Experiments on benchmark data sets show that the greedy algorithm outperforms state-of-the-art methods for unsupervised feature selection in the clustering task. Finally, the dissertation studies the connection between the column subset selection problem and other related problems in statistical data analysis, and it presents a unified framework that allows the greedy algorithms presented in this dissertation to be used to solve different related problems.
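The core greedy criterion of the first contribution can be sketched briefly: at each step, pick the column that best explains the current residual of the matrix, then project that column out. The sketch below is the naive version; the dissertation's recursive formula, which is its actual contribution, avoids recomputing these quantities and is not reproduced here.

```python
import numpy as np

def greedy_css(A, k):
    """Naive greedy column subset selection: pick k columns of A roughly
    minimizing the Frobenius reconstruction error of projecting A onto
    their span. A sketch of the criterion, not the efficient algorithm."""
    E = A.astype(float).copy()         # residual of A after chosen columns
    chosen = []
    for _ in range(k):
        col_norms = np.linalg.norm(E, axis=0)
        col_norms[col_norms < 1e-12] = np.inf      # skip exhausted columns
        # Score column j by ||E^T e_j|| / ||e_j||: how much residual it explains.
        scores = np.linalg.norm(E.T @ E, axis=0) / col_norms
        scores[chosen] = -np.inf
        j = int(np.argmax(scores))
        chosen.append(j)
        v = E[:, j] / np.linalg.norm(E[:, j])
        E -= np.outer(v, v @ E)        # deflate: remove column j's contribution
    return chosen
```

Roughly speaking, selecting representative instances or features then corresponds to running the same routine on the data matrix or on its transpose.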
94

An Approximation Method For Performance Measurement In Base-stock Controlled Assembly Systems

Rodoplu, Umut 01 January 2004
The aim of this thesis is to develop a tractable method for approximating the steady-state behavior of continuous-review, base-stock-controlled assembly systems with Poisson demand arrivals and with manufacturing and assembly facilities modeled as Jackson networks. One class of systems studied produces a single type of finished product by assembling a number of components; a second class produces two types of finished products and allows component commonality. The performance measures evaluated are the expected backorders, the fill rate, and the stockout probability for the finished product(s). A partially aggregated but exact model is approximated by assuming that the state-dependent transition rates arising from the partial aggregation are constant. This approximation leads to a closed-form steady-state probability distribution, which is of product form. The adequacy of the proposed model in approximating the steady-state performance measures is tested against simulation experiments over a large range of parameters, and the approximation turns out to be quite accurate, with absolute errors of at most 10% for the fill rate and stockout probability and of less than 1.37 (≈ 2) requests for the expected backorders. A greedy heuristic that employs the approximate steady-state probabilities is devised to optimize base-stock levels while meeting an overall service-level target for the finished product(s).
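The flavor of the base-stock optimization can be shown with a toy greedy loop. As an assumption for illustration only, lead-time demand per item is treated as Poisson, so the fill rate of item i with base stock S_i is P(demand < S_i); the thesis instead evaluates these measures through its Jackson-network-based approximation.

```python
from scipy.stats import poisson

def greedy_base_stock(rates, lead_times, costs, target_fill):
    """Toy greedy heuristic: while some item misses the service target,
    add one unit of base stock where it buys the largest fill-rate gain
    per unit of holding cost. Requires target_fill < 1."""
    S = [0] * len(rates)
    mean = [r * L for r, L in zip(rates, lead_times)]
    fill = lambda i: poisson.cdf(S[i] - 1, mean[i])   # P(lead-time demand < S_i)
    while True:
        short = [i for i in range(len(S)) if fill(i) < target_fill]
        if not short:
            return S
        gain = lambda i: (poisson.cdf(S[i], mean[i]) - fill(i)) / costs[i]
        S[max(short, key=gain)] += 1

# e.g. greedy_base_stock(rates=[2.0, 0.5], lead_times=[1.0, 2.0],
#                        costs=[1.0, 3.0], target_fill=0.95)
```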
95

[en] CONTAINERS ROAD TRANSPORTATION OPTIMIZATION: EXACT AND HEURISTICS METHODS / [pt] OTIMIZAÇÃO DO TRANSPORTE RODOVIÁRIO DE CONTÊINERES: MÉTODOS EXATO E HEURÍSTICO

SAULO BORGES PINHEIRO 03 September 2018
Despite Brazil's continental scale, the extent of its coastline, and the proximity of the coast to its large urban centers, the transport of containerized cargo by cabotage is still very limited in Brazil. In this scenario, Brazilian cabotage shipowners seek to gain ground by providing door-to-door services, achieving economies of scale when hiring the suppliers that perform the road legs, thereby increasing the competitiveness of cabotage against its main competitor, road transport. This work presents two models that aim to minimize the total cost of hiring road suppliers for a list of demands that must be served. The first is a mathematical model based on integer linear programming; the second is an algorithm that uses a greedy heuristic. The models were developed and tested on real scenarios experienced by a Brazilian cabotage shipowner over a given period of time. The results of the two models, compared with each other and with the solutions produced manually by the shipowner's employees, show that the optimization models' solutions are much better than the manual ones. The results also show that the greedy algorithm achieves results very close to those of the exact method, proving very useful given the ease of its implementation.
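The contrast between the two models can be suggested with a toy version of the greedy side. As an assumption for illustration, each demand is priced independently per supplier via a quote table; the actual models described above also capture economies of scale in hiring, which this sketch omits.

```python
def greedy_hire(demands, quote):
    """Toy greedy assignment: serve each transport demand with the
    cheapest supplier quoting it. quote maps (demand, supplier) -> cost;
    every demand is assumed to have at least one quote."""
    plan, total = {}, 0.0
    for d in demands:
        best = min((s for (dd, s) in quote if dd == d),
                   key=lambda s: quote[(d, s)])
        plan[d] = best
        total += quote[(d, best)]
    return plan, total
```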
96

Estudo de casos de complexidade de colorações gulosa de vértices e de arestas / Case studies of complexity of greedy colorings of vertices and edges

Oliveira, Ana Karolinna Maia de January 2011
OLIVEIRA, Ana Karolinna Maia de. Estudo de casos de complexidade de colorações gulosa de vértices e de arestas. 2011. 58 f. Dissertação (Mestrado) - Universidade Federal do Ceará, Centro de Ciências, Departamento de Computação, Fortaleza-CE, 2011. / The vertex- and edge-coloring problems, which consist in determining the smallest number of colors needed to color the vertices and edges of a graph, respectively, so that adjacent vertices and adjacent edges receive distinct colors, are computationally hard and a recurring research topic in graph theory because of the many practical problems they model. In this work, we study the worst-case performance of greedy algorithms for coloring vertices and edges. The greedy algorithm follows a general principle: receive, one by one, the vertices (respectively, edges) of the graph to be colored, always assigning the smallest possible color to the vertex (respectively, edge) being colored. Greedily coloring the edges of a graph is equivalent to greedily coloring its line graph, and this equivalence has driven much of the interest in greedy edge coloring. The worst-case performance of these algorithms is measured by the largest number of colors they can use. For greedy vertex coloring, this is the Grundy number, or greedy chromatic number, of the graph; for edge coloring, it is the greedy chromatic index, or Grundy index. Determining the Grundy number of an arbitrary graph is known to be NP-hard, whereas the complexity of determining the Grundy index of an arbitrary graph was an open problem. In this dissertation, we prove two complexity results. We prove that the Grundy number of a (q, q-4)-graph can be determined in polynomial time. This class strictly contains the classes of cographs and P4-sparse graphs, for which the same result had already been established, so this result generalizes those. The presented algorithm uses the primeval decomposition of graphs, determining the parameter in linear time. Regarding greedy edge coloring, we prove that determining the Grundy index is NP-complete for general graphs and polynomial for caterpillar graphs, implying that the Grundy number can be computed in polynomial time for line graphs of caterpillars. More specifically, we prove that the Grundy index of a caterpillar is Δ or Δ+1, where Δ is its maximum degree, and we present a polynomial algorithm to determine it exactly.
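As an illustration of the coloring procedure both results concern, the following is a minimal Python sketch of greedy (first-fit) coloring; the Grundy number is the largest number of colors it can be forced to use over all vertex orderings. The example graph and ordering are hypothetical.

```python
def first_fit_coloring(adj, order):
    """Greedy coloring: scan vertices in the given order, giving each the
    smallest color not used by its already-colored neighbors."""
    color = {}
    for v in order:
        taken = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in taken:
            c += 1
        color[v] = c
    return color

# On the path a-b-c-d, coloring the two endpoints first forces a third
# color even though two colors suffice; this worst case is the Grundy number.
adj = {'a': ['b'], 'b': ['a', 'c'], 'c': ['b', 'd'], 'd': ['c']}
print(first_fit_coloring(adj, ['a', 'd', 'b', 'c']))  # uses colors 0, 1, 2
```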
97

Meta-heurísticas GRASP e ILS aplicadas ao problema da variabilidade do tempo de resposta / GRASP and ILS metaheuristics applied to the response time variability problem

Menezes, Wesley Willame Dias 31 July 2014
With the advance of technology, there is growing demand for solutions that use fewer resources, run faster, and cost less. Accordingly, this work proposes a hybrid metaheuristic approach combining the Greedy Randomized Adaptive Search Procedure (GRASP) and Iterated Local Search (ILS), applied to the Response Time Variability Problem (RTVP). This problem arises in settings ranging from the allocation of scarce resources, such as industrial machines or meeting rooms, to the scheduling of bank customers with particular requirements, the planning of TV advertisements, and the routing of delivery vehicles. The procedure uses moves that shift a symbol, moves that swap the positions of two symbols, and a move called double bridge, a combination of shift and swap moves involving opposite symbols. The neighborhood structures are based on these moves, varying the number of symbols involved. The results obtained show that the proposed procedures handle the problem satisfactorily and are consistent with results from the literature.
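A compact sketch of the GRASP half of the hybrid may help fix ideas. The urgency rule, the alpha parameter, and the plain swap neighborhood below are illustrative stand-ins; the work's actual moves are the shift, swap, and double-bridge moves described above, and the ILS component is omitted.

```python
import random

def rtv(seq, demands):
    """Response time variability of a cyclic sequence: for each symbol,
    the squared deviations of its cyclic gaps from the ideal gap n/d."""
    n, total = len(seq), 0.0
    for s, d in demands.items():
        if d == 1:
            continue                   # a single copy's cyclic gap is exactly n
        p = [t for t, x in enumerate(seq) if x == s]
        gaps = [(p[(i + 1) % d] - p[i]) % n for i in range(d)]
        total += sum((g - n / d) ** 2 for g in gaps)
    return total

def grasp_rtvp(demands, iters=100, alpha=0.3, seed=0):
    """GRASP sketch: randomized greedy construction from a restricted
    candidate list (RCL), then first-improvement swap local search."""
    rng, n = random.Random(seed), sum(demands.values())
    best_seq, best_val = None, float('inf')
    for _ in range(iters):
        left, seq = dict(demands), []
        for _ in range(n):
            # Most 'urgent' symbols first: largest fraction still unplaced.
            cand = sorted(left, key=lambda s: -left[s] / demands[s])
            s = rng.choice(cand[:max(1, int(alpha * len(cand)))])
            seq.append(s)
            left[s] -= 1
            if not left[s]:
                del left[s]
        val, improved = rtv(seq, demands), True
        while improved:                # local search on pairwise swaps
            improved = False
            for i in range(n):
                for j in range(i + 1, n):
                    if seq[i] == seq[j]:
                        continue
                    seq[i], seq[j] = seq[j], seq[i]
                    new = rtv(seq, demands)
                    if new < val:
                        val, improved = new, True
                    else:
                        seq[i], seq[j] = seq[j], seq[i]
        if val < best_val:
            best_seq, best_val = list(seq), val
    return best_seq, best_val

# e.g. grasp_rtvp({'A': 4, 'B': 2, 'C': 2})
```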
98

Mesures de similarité pour cartes généralisées / Similarity measures between generalized maps

Combier, Camille 28 November 2012
A generalized map is a topological model that implicitly represents a set of cells (vertices, edges, faces, volumes, ...) together with their incidence and adjacency relations by means of darts and involutions. Generalized maps are used in particular to model images and 3D objects. To date, few tools exist for analyzing and comparing generalized maps; our goal is to define a set of error-tolerant tools for comparing them. We first define a similarity measure based on the size of the common part of two generalized maps, called the maximum common submap. We define two types of submaps, partial and induced: an induced submap must preserve all the involutions, whereas a partial submap allows some involutions not to be preserved, by analogy with partial subgraphs, in which not all edges need be present. We then define a set of operations that modify darts and sewings of generalized maps, together with an edit distance: the edit distance is the minimal cost incurred by any sequence of operations transforming one generalized map into another. This distance can take labels into account through a substitution operation; labels are placed on darts and add information to generalized maps. We then show that, for certain cost functions, our edit distance can be computed directly from the maximum common submap. Computing the edit distance is an NP-hard problem. We propose a greedy algorithm that computes an approximation of our map edit distance in polynomial time, together with a set of heuristics based on descriptors of the neighborhoods of the darts of a generalized map to guide the greedy algorithm, and we evaluate these heuristics on randomly generated test sets for which a bound on the distance is known. Finally, we suggest uses of our similarity measures in image and mesh analysis. We compare our edit distance on generalized maps with the graph edit distance, which is often used in structural pattern recognition, and we define a set of heuristics that take into account the labels of generalized maps modeling images and meshes. We highlight the qualitative aspect of our matching, which puts regions of an image and points of a mesh into correspondence.
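The greedy approximation can be sketched generically as follows. Everything here is an assumed placeholder: desc1 and desc2 map dart ids to neighborhood descriptors (the role played by the heuristics above), while sub_cost and indel_cost stand in for the edit operation costs.

```python
def greedy_map_distance(desc1, desc2, sub_cost, indel_cost):
    """Approximate an edit distance by greedy dart matching: repeatedly
    commit the cheapest remaining substitution; unmatched darts pay an
    insertion/deletion cost. A sketch only, not the thesis' algorithm."""
    pairs = sorted((sub_cost(desc1[a], desc2[b]), a, b)
                   for a in desc1 for b in desc2)
    matched1, matched2, cost = set(), set(), 0.0
    for c, a, b in pairs:
        if a in matched1 or b in matched2:
            continue
        if c > 2 * indel_cost:
            break  # deleting a and inserting b is cheaper than matching them
        matched1.add(a)
        matched2.add(b)
        cost += c
    cost += indel_cost * (len(desc1) - len(matched1) + len(desc2) - len(matched2))
    return cost
```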
99

A "Greedy" Institution with Great Job Benefits: Family Structure and Gender Variation in Commitment to Military Employment

Brummond, Karen M. 17 July 2015
Scholars describe both the military and the family as “greedy institutions,” or institutions that require expansive time and energy commitments, and alter participants’ master status (Segal 1986; Coser 1974). However, the military’s employment benefits may counteract its greedy elements. I use data from the 2008 Survey of Active Duty Members to examine commitment to military employment in wartime, accounting for greedy elements of military service (such as geographic mobility, risk of bodily harm, and separations), job benefits, family structure, and gender. The results show that women in dual-service marriages, unmarried men, and those who experienced separations reported lower career commitment and affective organizational commitment. In contrast, the use of military job benefits was positively associated with commitment. Counterintuitively, parenthood, geographic mobility, and being stationed in Afghanistan were also positively associated with commitment. These findings complicate the military’s label as a greedy institution, and contribute to the literature on work-family conflict and gendered organizations.
100

The Monk Problem: Verifier, heuristics and graph decompositions for a pursuit-evasion problem with a node-located evader

Fredriksson, Bastian, Lundberg, Edvin January 2015
This paper concerns a specific pursuit-evasion problem with a node-located evader, which we call the monk problem. First, we propose a way of verifying a strategy using a new kind of recursive system, called an EL-system. We show how an EL-system representing a graph instance of the problem can be represented using matrices, and we give an example of how this can be used to implement an efficient verifier. In the later parts we propose heuristics, based on a greedy algorithm, for constructing a strategy. Our main focus is to minimise the number of pursuers needed, called the search number. The heuristics rely on properties of minimal stable components. We show that the minimal stable components are equivalent to the strongly connected components of a graph, and prove that the search number of a graph is equal to the maximum search number of its strongly connected components. We also establish lower and upper bounds for the search number to narrow the search space.
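The decomposition result lends itself to a direct divide-and-conquer implementation, sketched here with networkx; solve_component is a placeholder for any method that computes the search number of a single strongly connected component, such as the greedy heuristics proposed above.

```python
import networkx as nx

def search_number(G, solve_component):
    # By the decomposition result, the search number of a digraph equals
    # the maximum search number over its strongly connected components.
    return max(solve_component(G.subgraph(scc).copy())
               for scc in nx.strongly_connected_components(G))
```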
