About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
61

Impact of polymer type, dosage, and mixing regime and sludge type on sludge floc properties

Kolda, Bridget C. 17 January 2009 (has links)
This research investigated the impact of sludge type, polymer type (percent mole charge), dosage, mixing rate, and solution ionic strength on bound water content of sludge flocs. Data used to evaluate the extent of dewatering included percent dry solids, bulk density, bound water content (determined by the dilatometric method), floc density (determined by isopycnic centrifugation), and cake solids concentrations. Calculated floc densities and bound water contents were compared with measured values. The polymer mole charge had marginal impact on bound water content. The optimal polymer dose as determined by dose curves did not necessarily result in the least bound water content. The mixing rate did not have an impact on bound water content of the chemical sludge, but did have an impact on bound water content of the biological sludge. However, the percentage of total water removed that was due to bound water removal was not affected by rate of mixing, polymer mole charge, or polymer dose. Altering solution ionic strength did not appear to improve bound water removal. The calculated bound water content values determined using measured floc densities were consistently greater than the measured bound water content values determined by the dilatometric method. The bound water content per the dilatometric method did not account for all the water present in the flocs as determined by the isopycnic centrifugation method. / Master of Science
62

Influence of Plant Age, Soil Moisture, and Temperature Cycling Date on Container-Grown Herbaceous Perennials

Kingsley-Richards, Sarah 18 July 2011 (has links)
Perennial growers overwintering plant stock require information to assist in deciding which containerized plants are most likely to successfully overwinter. Three studies on container-grown herbaceous perennials were conducted to examine the influence of plant age, soil moisture, and temperature cycling date on cold hardiness. In January, plants were exposed to controlled freezing temperatures of -2, -5, -8, -11, and -14C and then returned to a 3-5C greenhouse. In June, plants were assessed using a visual rating scale of 1-5 (1 = dead, 3-5 = increasing salable quality, varying by cultivar) and dry weights of new growth were determined. Controlled freezing in November and March was also included in the third study. In the first study, two ages of plants were exposed to controlled freezing temperatures in January. For Geranium x cantabrigiense 'Karmina', age had no effect on either rating or dry weight in one study year. In two Sedum 'Matrona' study years, age had no effect on dry weight but ratings were higher for older plants than younger plants in the first year and higher for younger plants than older plants in the second year. In two Leucanthemum x superbum 'Becky' study years, age had an effect on both rating and dry weight, which were both generally higher for younger plants than older plants. In the second study, plants were maintained in pots at two different soil moisture levels prior to exposure to controlled freezing temperatures in January. Coreopsis 'Tequila Sunrise' and Carex morrowii 'Ice Dance' showed no effect of soil moisture level on either rating or dry weight. Soil moisture level had no effect on dry weight but ratings were higher for Geranium x cantabrigiense 'Cambridge' “wet” plants and for Heuchera 'Plum Pudding' “dry” plants. For Carex laxiculmus 'Hobb' (Bunny Blue™), soil moisture level had an effect: dry weight was higher for “dry” plants. Means were of salable quality for Geranium and Heuchera at all temperatures and for Carex laxiculmus at temperatures above -11C. The effects of soil moisture level on Carex oshimensis were inconclusive. In the third study, during November, January, and March, plants were subjected to temperature cycling treatments prior to exposure to controlled freezing temperatures. Geranium x cantabrigiense 'Cambridge' were more tolerant of both temperature cycling and freezing temperatures in January and an increased number of cycles in November had an advantageous effect. Sedum 'Matrona' were more tolerant of temperature cycling and freezing temperatures in January and an increased number of cycles in March had an advantageous effect. Leucanthemum x superbum 'Becky' were more tolerant of temperature cycling in January in the second year of the study and an increased number of cycles in November had an advantageous effect in the first year and in all months in the second year. Overwintering younger container-grown plants is likely to result in more growth and higher quality following exposure to freezing temperatures. Effects of soil moisture level on overwintering container-grown plant growth and quality are cultivar-specific and a general effect could not be established in these studies. Overwintered container-grown plants are likely to be hardier in January, and slight temperature cycles prior to exposure to freezing temperatures generally increase hardiness.
63

Aplicação do método branch-and-bound na programação de tarefas em uma única máquina com data de entrega comum sob penalidades de adiantamento e atraso. / Branch-and-bound method application in a single machine earliness/tardiness scheduling problem with a common due date.

Kawamura, Márcio Seiti 07 April 2006 (has links)
O objetivo desse trabalho é o de estudar o problema de programação de tarefas num ambiente produtivo com uma única máquina com data comum de entrega. Nesse caso, as tarefas, depois de processadas uma única vez na máquina, devem ser entregues em uma data comum e sofrem penalidades de adiantamento e de atraso conforme o instante em que são completadas. Na prática, esse problema é encontrado em casos de pedidos de lotes de produtos com data de entrega comum pré-especificada, embarques para exportação e material químico ou misturas que têm vida média de curta duração. Problemas desse tipo são NP-hard (Hall, Kubiak & Sethi, 1991; Hoogeveen & van de Velde, 1991), sendo comumente tratados na literatura através de heurísticas e meta-heurísticas. Visto não ser de nosso conhecimento a existência na literatura de tratamento desse problema através de métodos exatos, propôs-se a utilização de um algoritmo do tipo branch-and-bound para obtenção da solução ótima do problema que minimize a soma das penalidades de adiantamento e de atraso. No desenvolvimento do algoritmo, a utilização de propriedades do problema foi importante na elaboração de limitantes inferiores e regras de dominância que melhoraram a eficiência do modelo. Os experimentos realizados avaliaram o desempenho de diferentes critérios elaborados, como escolha do nó pai, limitante inferior, ordem de execução das estratégias e ordem de construção da seqüência. Os resultados obtidos mostraram-se robustos quando comparados com o benchmark da literatura e revelaram o bom desempenho do modelo para problemas de pequeno porte, superando o desempenho de programas de otimização comerciais. / The objective of this work is to study the single-machine scheduling problem with a common due date. In this case, jobs are processed exactly once on the machine, must be delivered on a common due date, and incur earliness or tardiness penalties according to their completion times. In practice, this problem arises with orders of product batches that have a prespecified common due date, export shipments, and chemical materials or mixtures with a short half-life. This kind of problem is NP-hard (Hall, Kubiak & Sethi, 1991; Hoogeveen & van de Velde, 1991) and has usually been treated in the literature with heuristics and meta-heuristics. Since we are not aware of any exact-method treatment of this problem in the literature, a branch-and-bound algorithm was proposed to obtain the optimal solution that minimizes the sum of the earliness and tardiness penalties. In developing the algorithm, problem properties were important for deriving the lower bounds and dominance rules that improved the efficiency of the model. The experiments evaluated the performance of different design choices, such as the choice of the parent node, the lower bound, the order in which strategies are executed, and the order in which the sequence is built. The results proved robust when compared with the literature benchmark and showed the good performance of the model on small instances, outperforming commercial optimization software.
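As an illustration of the branch-and-bound idea summarized above (a minimal sketch, not the algorithm developed in the thesis, whose lower bounds and dominance rules are much stronger), the following Python fragment enumerates job sequences for a toy common-due-date instance and prunes a branch as soon as the exact penalty of its fixed prefix reaches the incumbent. The job data are invented, and it is assumed that jobs run back-to-back from time zero with no inserted idle time.

```python
def schedule_cost(seq, p, a, b, d):
    """Total earliness/tardiness penalty of a job sequence started at time 0."""
    t, cost = 0, 0
    for j in seq:
        t += p[j]
        cost += a[j] * max(0, d - t) + b[j] * max(0, t - d)
    return cost

def branch_and_bound(p, a, b, d):
    """Depth-first branch-and-bound over job sequences. Appending jobs never
    changes the completion times of the fixed prefix, so the prefix penalty
    is a valid lower bound and can be used for pruning."""
    n = len(p)
    best_cost, best_seq = float("inf"), None

    def expand(prefix, remaining, t, cost):
        nonlocal best_cost, best_seq
        if cost >= best_cost:          # prune: prefix already no better than incumbent
            return
        if not remaining:
            best_cost, best_seq = cost, list(prefix)
            return
        for j in sorted(remaining):    # fixed branching (construction) order
            c = t + p[j]
            penalty = a[j] * max(0, d - c) + b[j] * max(0, c - d)
            expand(prefix + [j], remaining - {j}, c, cost + penalty)

    expand([], frozenset(range(n)), 0, 0)
    return best_cost, best_seq

# toy instance: 4 jobs, common due date d = 10
p = [4, 3, 5, 2]    # processing times
a = [1, 2, 1, 3]    # earliness penalty weights
b = [2, 1, 3, 1]    # tardiness penalty weights
cost, seq = branch_and_bound(p, a, b, d=10)
assert cost == schedule_cost(seq, p, a, b, 10)
print(cost, seq)
```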
64

Um método híbrido para o problema de dimensionamento de lotes / A hybrid method for the lot sizing problem

Cherri, Luiz Henrique 27 February 2013 (has links)
Neste trabalho, abordamos métodos de resolução para o problema de dimensionamento de lotes que contempla o planejamento da produção de vários produtos em múltiplas máquinas. A fabricação dos produtos consome tempo de produção e preparação de uma capacidade de produção limitada. A demanda pelos produtos é conhecida e pode ser atendida com atraso durante um horizonte de planejamento finito. O objetivo é minimizar a soma dos custos de produção, preparação para a produção, estoque dos produtos e atraso na entrega destes. Em uma primeira etapa, desenvolvemos uma busca tabu determinística baseada em outra, aleatória, que foi apresentada na literatura. Com isso, realizamos uma análise sobre a influência de fatores aleatórios sobre heurísticas do tipo busca tabu quando aplicadas ao problema estudado. Posteriormente, desenvolvemos um método híbrido baseado em busca tabu, branch-and-cut e programação linear para a resolução do problema. Nos testes computacionais realizados, o método proposto mostrou-se competitivo quando comparado a outras heurísticas apresentadas na literatura. / This work proposes two methods to solve the capacitated lot-sizing problem with multiple products and parallel machines. The manufacturing of products consumes machine capacity (production time and setup time), which is scarce. The demand for the products is known and can be met with backlogging over a finite planning horizon. The objective is to minimize the sum of production, setup, holding and backlog costs. In a first step, we developed a deterministic tabu search heuristic based on a random version from the literature and then conducted an analysis of the influence of random factors on tabu search heuristics when applied to the studied problem. Subsequently, we designed a hybrid method based on tabu search, branch-and-cut and linear programming. Computational experiments show that this hybrid method is competitive with other heuristics presented in the literature.
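To make the tabu-search component more concrete, here is a deliberately simplified Python sketch: a deterministic tabu search over setup decisions for a single-item, uncapacitated lot-sizing toy instance (flip one setup period per move, fixed scan order, short tabu tenure, and an aspiration criterion that accepts a tabu move if it improves on the best solution). The instance data, neighbourhood and parameters are illustrative assumptions; the thesis's method additionally handles multiple products, capacities, setup times and backlogging, and couples the search with branch-and-cut and linear programming.

```python
def plan_cost(setups, demand, setup_cost, hold_cost):
    """Cost of serving each period's demand from the most recent setup period
    (single item, no capacity limit, no backlogging)."""
    total, last = 0.0, None
    for t, d in enumerate(demand):
        if setups[t]:
            last = t
            total += setup_cost
        if d:
            if last is None:
                return float("inf")          # demand before the first setup is infeasible
            total += hold_cost * (t - last) * d
    return total

def tabu_search(demand, setup_cost, hold_cost, iters=100, tenure=5):
    n = len(demand)
    current = (True,) + (False,) * (n - 1)   # start with a single setup in period 0
    best, best_cost = current, plan_cost(current, demand, setup_cost, hold_cost)
    tabu = {}                                # flipped period -> iteration when tabu expires
    for it in range(iters):
        moves = []
        for t in range(n):                   # deterministic neighbourhood scan
            cand = current[:t] + (not current[t],) + current[t + 1:]
            c = plan_cost(cand, demand, setup_cost, hold_cost)
            if tabu.get(t, -1) <= it or c < best_cost:   # aspiration criterion
                moves.append((c, t, cand))
        if not moves:
            break
        c, t, cand = min(moves)              # best admissible move, ties broken by period
        current = cand
        tabu[t] = it + tenure
        if c < best_cost:
            best, best_cost = cand, c
    return best, best_cost

demand = [20, 0, 30, 10, 0, 40]
print(tabu_search(demand, setup_cost=50, hold_cost=1))
```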
65

Minimização de funções decomponíveis em curvas em U definidas sobre cadeias de posets -- algoritmos e aplicações / Minimization of functions decomposable into U-shaped curves defined on poset chains -- algorithms and applications

Reis, Marcelo da Silva 28 November 2012 (has links)
O problema de seleção de características, no contexto de Reconhecimento de Padrões, consiste na escolha de um subconjunto X de um conjunto S de características, de tal forma que X seja "ótimo" dentro de algum critério. Supondo a escolha de uma função custo c apropriada, o problema de seleção de características é reduzido a um problema de busca que utiliza c para avaliar os subconjuntos de S e assim detectar um subconjunto de características ótimo. Todavia, o problema de seleção de características é NP-difícil. Na literatura existem diversos algoritmos e heurísticas propostos para abordar este problema; porém, quase nenhuma dessas técnicas explora o fato que existem funções custo cujos valores são estimados a partir de uma amostra e que descrevem uma "curva em U" nas cadeias do reticulado Booleano (P(S),<=), um fenômeno bem conhecido em Reconhecimento de Padrões: conforme aumenta-se o número de características consideradas, há uma queda no custo do subconjunto avaliado, até o ponto em que a limitação no número de amostras faz com que seguir adicionando características passe a aumentar o custo, devido ao aumento no erro de estimação. Em 2010, Ris e colegas propuseram um novo algoritmo para resolver esse caso particular do problema de seleção de características, que aproveita o fato de que o espaço de busca pode ser organizado como um reticulado Booleano, assim como a estrutura de curvas em U das cadeias do reticulado, para encontrar um subconjunto ótimo. Neste trabalho estudamos a estrutura do problema de minimização de funções custo cujas cadeias são decomponíveis em curvas em U (problema U-curve), provando que o mesmo é NP-difícil. Mostramos que o algoritmo de Ris e colegas possui um erro que o torna de fato sub-ótimo, e propusemos uma versão corrigida e melhorada do mesmo, o algoritmo U-Curve-Search (UCS). Apresentamos também duas variações do algoritmo UCS que controlam o espaço de busca de forma mais sistemática. Introduzimos dois novos algoritmos branch-and-bound para abordar o problema, chamados U-Curve-Branch-and-Bound (UBB) e Poset-Forest-Search (PFS). Para todos os algoritmos apresentados nesta tese, fornecemos análise de complexidade de tempo e, para alguns deles, também prova de corretude. Implementamos todos os algoritmos apresentados utilizando o arcabouço featsel, também desenvolvido neste trabalho; realizamos experimentos ótimos e sub-ótimos com instâncias de dados reais e simulados e analisamos os resultados obtidos. Por fim, propusemos um relaxamento do problema U-curve que modela alguns tipos de projeto de classificadores; também provamos que os algoritmos UCS, UBB e PFS resolvem esta versão generalizada do problema. / The feature selection problem, in the context of Pattern Recognition, consists in the choice of a subset X of a set S of features, such that X is "optimal" under some criterion. If we assume the choice of a proper cost function c, then the feature selection problem is reduced to a search problem, which uses c to evaluate the subsets of S, therefore finding an optimal feature subset. However, the feature selection problem is NP-hard.
Although there are a myriad of algorithms and heuristics to tackle this problem in the literature, almost none of those techniques explores the fact that there are cost functions whose values are estimated from a sample and describe a "U-shaped curve" in the chains of the Boolean lattice (P(S),<=), a well-known phenomenon in Pattern Recognition: for a fixed number of samples, the increase in the number of considered features may have two consequences: if the available sample is enough for a good estimation, then there should be a reduction of the estimation error; otherwise, the lack of data induces an increase of the estimation error. In 2010, Ris et al. proposed a new algorithm to solve this particular case of the feature selection problem: their algorithm takes into account the fact that the search space may be organized as a Boolean lattice, as well as that the chains of this lattice describe a U-shaped curve, to find an optimal feature subset. In this work, we studied the structure of the minimization problem of cost functions whose chains are decomposable in U-shaped curves (the U-curve problem), and proved that this problem is actually NP-hard. We showed that the algorithm introduced by Ris et al. has an error that leads to suboptimal solutions, and proposed a corrected and improved version, the U-Curve-Search (UCS) algorithm. Moreover, to manage the search space in a more systematic way, we also presented two modifications of the UCS algorithm. We introduced two new branch-and-bound algorithms to tackle the U-curve problem, namely U-Curve-Branch-and-Bound (UBB) and Poset-Forest-Search (PFS). For each algorithm presented in this thesis, we provided time complexity analysis and, for some of them, also proof of correctness. We implemented each algorithm through the featsel framework, which was also developed in this work; we performed optimal and suboptimal experiments with instances from real and simulated data, and analyzed the results. Finally, we proposed a generalization of the U-curve problem that models some kinds of classifier design; we proved the correctness of the UCS, UBB, and PFS algorithms for this generalized version of the U-curve problem.
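The U-shape that this abstract relies on can be seen on a single maximal chain of the Boolean lattice: along the chain the cost first falls and then rises, so a walk up the chain may stop at the first increase. The Python sketch below shows only this single-chain ingredient on an invented cost function and feature ordering; it is not the UCS, UBB or PFS algorithm, which search many chains of the lattice.

```python
def chain_minimum(order, cost):
    """Walk one maximal chain of the Boolean lattice (the empty set, then adding
    features in `order`) and stop at the first cost increase. If the chain is
    U-shaped, the prefix minimum found this way is the minimum of the chain."""
    best_set, best_cost = frozenset(), cost(frozenset())
    current = set()
    for f in order:
        current.add(f)
        c = cost(frozenset(current))
        if c > best_cost:          # U-shape: once the cost rises, it keeps rising
            break
        best_set, best_cost = frozenset(current), c
    return best_set, best_cost

# toy cost: penalize missing "relevant" features plus a per-feature term that
# stands in for the estimation error of larger subsets
S = ["a", "b", "c", "d", "e"]
relevant = {"a", "b"}
def cost(X):
    return len(relevant - X) + 0.4 * len(X)

# with this ordering the chain {}, {a}, {a,b}, ... is U-shaped
print(chain_minimum(S, cost))      # -> (frozenset({'a', 'b'}), 0.8)
```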
66

Bounded Eigenvalues of Fully Clamped and Completely Free Rectangular Plates

Mochida, Yusuke January 2007 (has links)
An exact solution for the vibration of rectangular plates is available only for plates with two opposite edges simply supported. Otherwise, they are analysed using approximate methods. There are several approximate methods for conducting a vibration analysis, such as the Rayleigh-Ritz method, the Finite Element Method, the Finite Difference Method, and the Superposition Method. The Rayleigh-Ritz method and the finite element method give upper bound results for the natural frequencies of plates. However, these methods have the disadvantage that the error due to discretisation cannot be calculated easily. Therefore, it would be good to find a suitable method that gives lower bound results for the natural frequencies to complement the results from the Rayleigh-Ritz method. The superposition method is also a convenient and efficient method, but it gives lower bound solutions only in some cases. Whether it gives upper bound or lower bound results for the natural frequencies depends on the boundary conditions. It is also known that the finite difference method always gives lower bound results. This thesis presents bounded eigenvalues, which are the dimensionless form of the natural frequencies, calculated using the superposition method and the finite difference method. All computations were done using the MATLAB software package. The convergence tests show that the superposition method gives a lower bound for the eigenvalues of fully clamped plates, and an upper bound for completely free plates. It is also shown that the finite difference method gives a lower bound for the eigenvalues of completely free plates. Finally, upper bounds and lower bounds for the eigenvalues of fully clamped and completely free plates are given.
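The lower-bound behaviour of the finite difference method mentioned in this abstract can be illustrated with a much simpler one-dimensional analogue (this is not the plate computation of the thesis, which involves the biharmonic operator and was done in MATLAB). For -u'' = lambda*u on (0,1) with u(0)=u(1)=0, the exact eigenvalues are (k*pi)^2, and the standard 3-point finite-difference scheme always underestimates them:

```python
import numpy as np

n = 50                                   # interior grid points
h = 1.0 / (n + 1)
# 3-point finite-difference approximation of -d^2/dx^2 with Dirichlet conditions
A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
approx = np.sort(np.linalg.eigvalsh(A))[:5]
exact = np.array([(k * np.pi) ** 2 for k in range(1, 6)])
# every discrete eigenvalue (4/h^2)*sin^2(k*pi*h/2) lies below the exact (k*pi)^2
print(np.column_stack([approx, exact]))
```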
67

Performance analysis of symbol timing estimators for time-varying MIMO channels

Panduru, Flaviu Gabriel 15 November 2004 (has links)
The purpose of this thesis is to derive and analyze the theoretical limits for estimating the symbol timing delay of Multiple-Input Multiple-Output (MIMO) systems. Two main N X M system models are considered, where N represents the number of transmit antennas and M denotes the number of receive antennas: the 2 X 2 system used by S.-A. Yang and J. Wu and the 4 X 4 system used by Y.-C. Wu and E. Serpedin. The second model has been extended to take into account the symbol time-varying fading. The theoretical estimation limits are shown by several bounds: the modified Cramer-Rao bound (MCRB), the Cramer-Rao bound (CRB) and the Barankin bound (BB). The BB is exploited to obtain accurate information regarding the length of data necessary to obtain good estimation. Two scenarios for synchronization are presented: data-aided (DA) and non-data-aided (NDA). Two models for the fading process are considered: block fading and symbol time-varying fading, respectively, the second case being assumed to be Rayleigh distributed. The asymptotic Cramer-Rao bounds for low signal-to-noise ratio (low-SNR) and for high-SNR are derived and the performance of several estimators is presented. The performance variation of bounds and estimators is studied by varying different parameters, such as the number of antennas, the length of data taken into consideration during the estimation process, the SNR, the oversampling factor, the power and the Doppler frequency shift of the fading.
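For a feel of what such bounds look like, the Python sketch below computes the Cramer-Rao bound in a deliberately stripped-down setting: a single antenna, data-aided estimation of the delay of a known real pulse in white Gaussian noise, where the Fisher information is the energy of the pulse derivative divided by the noise variance. The pulse shape, noise level and sampling grid are illustrative assumptions only; none of the MIMO, fading or NDA aspects analysed in the thesis are modelled.

```python
import numpy as np

# Model: y_n = s(t_n - tau) + w_n, with w_n ~ N(0, sigma^2) i.i.d.
# Fisher information: I(tau) = (1/sigma^2) * sum_n s'(t_n - tau)^2, CRB = 1/I.
t = np.linspace(-4, 4, 801)
T = t[1] - t[0]
pulse = np.sinc(t)                   # assumed known pulse shape (illustrative)
dpulse = np.gradient(pulse, T)       # numerical derivative with respect to the delay
sigma2 = 0.1                         # assumed noise variance
crb = sigma2 / np.sum(dpulse ** 2)
print(f"CRB on the timing delay: {crb:.2e} (squared time units)")
```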
69

An alternative training & practicing campus

Wong, Kwun-wah., 黃觀華. January 1999 (has links)
published_or_final_version / Architecture / Master / Master of Architecture
70

The Minimum Witt Index of a Graph

Elzinga, Randall J. 17 September 2007 (has links)
An independent set in a graph G is a set of pairwise nonadjacent vertices, and the maximum size, alpha(G), of an independent set in G is called the independence number. Given a graph G and weight matrix A of G with entries from some field F, the maximum dimension of an A-isotropic subspace, known as the Witt index of A, is an upper bound on alpha(G). Since any weight matrix can be used, it is natural to seek the minimum upper bound on the independence number of G that can be achieved by a weight matrix. This minimum, iota_F^*(G), is called the minimum Witt index of G over F, and the resulting bound, alpha(G) <= iota_F^*(G), is called the isotropic bound. When F is finite, the possible values of iota_F^*(G) are determined and the graphs that attain the isotropic bound are characterized. The characterization is given in terms of graph classes CC(n,t,c) and CK(n,t,k) constructed from certain spanning subgraphs called C(n,t,c)-graphs and K(n,t,k)-graphs. Here t is the term rank of the adjacency matrix of G. When F=R, the isotropic bound is known as the Cvetković bound. It is shown that it is sufficient to consider a finite number of weight matrices A when determining iota_R^*(G) and that, in many cases, two weight values suffice. For example, if the vertex set of G can be covered by alpha(G) cliques, then G attains the Cvetković bound with a weight matrix with two weight values. Inequalities on alpha and iota_F^* resulting from graph operations such as sums, products, vertex deletion, and vertex identification are examined and, in some cases, conditions that imply equality are proved. The equalities imply that the problem of determining whether or not alpha(G)=iota_F^*(G) can be reduced to that of determining iota_F^*(H) for certain crucial graphs H found from G. / Thesis (Ph.D, Mathematics & Statistics) -- Queen's University, 2007-09-04 15:38:47.57
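As a concrete illustration of the real-field case (a minimal sketch, not the characterizations proved in the thesis), the Cvetković bound can be computed from the inertia of a weight matrix: alpha(G) <= n - max(n_plus, n_minus), where n_plus and n_minus count the positive and negative eigenvalues. The Python fragment below uses the adjacency matrix of the 5-cycle as the weight matrix, for which the bound equals the independence number 2; the choice of graph and weight matrix is illustrative only.

```python
import numpy as np

def cvetkovic_bound(W, tol=1e-9):
    """Inertia (Cvetkovic) bound on the independence number from a real symmetric
    weight matrix W: alpha(G) <= n - max(n_plus, n_minus)."""
    eig = np.linalg.eigvalsh(W)
    n_plus = int(np.sum(eig > tol))
    n_minus = int(np.sum(eig < -tol))
    return len(eig) - max(n_plus, n_minus)

# 5-cycle C5, with its adjacency matrix used as the weight matrix
A = np.zeros((5, 5))
for i in range(5):
    A[i, (i + 1) % 5] = A[(i + 1) % 5, i] = 1.0
print(cvetkovic_bound(A))   # prints 2 = alpha(C5), so the bound is attained here
```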
