131. Investigation of Maximum Mud Pressure within Sand and Clay during Horizontal Directional Drilling. Xia, Hongwei (14 January 2009)
Horizontal Directional Drilling (HDD) has been used internationally for the trenchless installation of utility conduits and other infrastructure. However, mud loss caused by excessive mud pressure in the borehole remains a challenge for trenchless designers and contractors, especially when the bore crosses cohesionless material. Investigation of the mud loss problem is necessary so that HDD can be applied with greater confidence to the installation of pipes and other infrastructure.
The main objectives of this research have been to investigate the maximum allowable mud pressure to prevent mud loss, through finite element analysis and small- and large-scale laboratory experiments. Recent laboratory experiments on mud loss within sand are reported. Comparisons indicate that the finite element method provides an effective estimate of the maximum mud pressure, while the "state-of-the-art" design practice, the "Delft solution", overestimates the maximum mud pressure by more than 100%. Based on data interpreted using the Particle Image Velocimetry (GeoPIV) program, the surface displacements exhibit a "bell" shape, with the maximum displacement located approximately above the center of the borehole.
A parametric study is carried out to investigate the effect of various parameters, such as the coefficient of lateral earth pressure at rest K0, on the maximum allowable mud pressure within sand. An approximate equation is developed to facilitate design estimates of the maximum allowable mud pressure within sand.
A new approach is introduced to account for the effect of the coefficient of lateral earth pressure at rest K0 on the blowout solution within clay. Evaluations using the finite element method indicate that the new approach provides a better estimate of the maximum allowable mud pressure than the "Delft solution" in clay when the initial ground stress state is anisotropic (K0 ≠ 1). Conclusions of this research and suggestions for future investigation are provided. Thesis (Ph.D., Civil Engineering), Queen's University, 2009.

132. Evaluation of Maximum Entropy Moment Closure for Solution to Radiative Heat Transfer Equation. Fan, Doreen (22 November 2012)
The maximum entropy moment closure for the two-moment approximation of the radiative transfer equation is presented. The resulting moment equations, known as the M1 model, are solved using a finite-volume method with adaptive mesh refinement (AMR) and two Riemann-solver-based flux functions: a Roe-type solver and a Harten-Lax-van Leer (HLL) solver. Three different boundary schemes are also presented and discussed. When compared to the discrete ordinates method (DOM) in several representative one- and two-dimensional radiation transport problems, the results indicate that while the M1 model cannot accurately resolve multi-directional radiation transport in low-absorption media, it does provide reasonably accurate solutions, both qualitatively and quantitatively, relative to the DOM predictions in most test cases involving either absorbing-emitting or scattering media. The results also show that the M1 model is computationally less expensive than the DOM for more realistic radiation transport problems involving scattering and complex geometries.
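For context, the M1 maximum-entropy closure admits a closed-form Eddington factor relating the radiative pressure to the energy density E and flux F. The sketch below evaluates the standard textbook expression in Python; it is an illustration, not code from the thesis:

```python
import numpy as np

def m1_eddington_factor(f):
    """Eddington factor of the M1 maximum-entropy closure.

    f is the normalized flux |F| / (c E), valid on [0, 1].
    chi = 1/3 recovers the isotropic (diffusion) limit at f = 0,
    and chi = 1 the free-streaming limit at f = 1.
    """
    f = np.asarray(f, dtype=float)
    return (3.0 + 4.0 * f**2) / (5.0 + 2.0 * np.sqrt(4.0 - 3.0 * f**2))

print(m1_eddington_factor(0.0))  # 0.333... (isotropic limit)
print(m1_eddington_factor(1.0))  # 1.0 (free-streaming limit)
```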

134. The geology and geomorphology of the Denton Hills, Southern Victoria Land, Antarctica. Carson, Nicholas Joseph (January 2012)
This research is an integrated geological and geomorphological study of the Denton Hills area. The study area is part of the foothills of the Transantarctic Mountains, which divide East and West Antarctica, providing an opportunity to investigate glacial events from both sides. Because the study area is ice-free, it allows good examination of the bedrock geology, and its geomorphological features are well preserved, allowing them to be examined and sampled.
Comprehensive geological and geomorphological maps have been produced, extending knowledge of the spatial distribution of units and features. Both maps reveal a complex history of evolution. The original geological units have been subjected to deformation and the intrusion of large plutons. The geomorphological mapping shows that ice has flowed in alternating directions through the valleys, and that the valleys have had long periods during which they were occupied by large proglacial lakes. As the Antarctic ice sheets expanded, they flowed into the valleys either from the west (the Royal Society Range, draining the East Antarctic Ice Sheet) or from the east (McMurdo Sound). Ice flowed from McMurdo Sound when the West Antarctic Ice Sheet expanded, causing the grounding line of the ice sheet to move north through the Ross Sea.
Surface exposure dating completed during the study has correlated the timing of glacial events with global cycles. The dating confirmed the presence of the large proglacial lake in the Miers Valley during the Last Glacial Maximum, which drained about 14 ka. The Garwood Glacier has also been directly linked to the Last Glacial Maximum, with a moraine forming about 22 ka. The dating has also shown that during the Last Glacial Maximum there was little fluctuation in the size of glaciers draining the East Antarctic Ice Sheet, with features dated to the onset of the Last Glacial Maximum.

135. Sequential and Parallel Algorithms for the Generalized Maximum Subarray Problem. Bae, Sung Eun (2007)
The maximum subarray problem (MSP) involves selecting the segment of consecutive array elements with the largest possible sum over all segments in a given array. Efficient algorithms for the MSP and related problems are expected to contribute to applications in genomic sequence analysis, data mining, and computer vision. The MSP is a conceptually simple problem, and several linear-time optimal algorithms for the 1D version are already known; for the 2D version, the currently known upper bounds are cubic or near-cubic time. For wider applications, it is interesting to compute multiple maximum subarrays instead of just one, which motivates the work in the first half of the thesis.

The generalized problem of the K-maximum subarray involves finding K segments of the largest sum in sorted order. Two subcategories of the problem can be defined: the K-overlapping maximum subarray problem (K-OMSP) and the K-disjoint maximum subarray problem (K-DMSP). Studies on the K-OMSP had not been undertaken previously, hence the thesis explores various techniques to speed up the computation and presents several new algorithms. The first algorithm for the 1D problem runs in O(Kn) time, and increasingly efficient algorithms of O(K² + n log K) time, O((n + K) log K) time, and O(n + K log min(K, n)) time are presented. Extensions of these results to higher dimensions are considered, establishing O(n³) time for the 2D version of the problem when K is bounded by a certain range.

Ruzzo and Tompa studied the problem of all maximal scoring subsequences, whose definition is almost identical to that of the K-DMSP apart from a few subtle differences. Despite these differences, their linear-time algorithm is readily capable of computing the 1D K-DMSP, but it is not easily extended to higher dimensions. This observation motivates a new algorithm based on the tournament data structure, with O(n + K log min(K, n)) worst-case time. The extended version of the new algorithm can process a 2D problem in O(n³ + min(K, n) · n² log min(K, n)) time, that is, O(n³) for K ≤ n/log n.

For the 2D MSP, cubic-time sequential computation is still expensive for practical purposes, considering potential applications in computer vision and data mining. The second half of the thesis investigates a speed-up option through parallel computation. Previous parallel algorithms for the 2D MSP have huge demands for hardware resources, or their target parallel computation models are purely theoretical. A good compromise between speed and cost can be realized by utilizing a mesh topology. Two mesh algorithms for the 2D MSP with O(n) running time that require a network of size O(n²) are designed and analyzed, and various techniques are considered to maximize their practicality.
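As a point of reference for the linear-time 1D algorithms mentioned above, the classic scan usually attributed to Kadane is sketched below; this is an illustration, not the thesis's own pseudocode:

```python
def max_subarray_1d(a):
    """Kadane's linear-time algorithm for the 1D maximum subarray problem.

    Returns (best_sum, start, end) where a[start:end] attains the best sum.
    """
    best_sum, best_start, best_end = a[0], 0, 1
    cur_sum, cur_start = a[0], 0
    for i in range(1, len(a)):
        if cur_sum < 0:                # a negative prefix never helps; restart
            cur_sum, cur_start = a[i], i
        else:
            cur_sum += a[i]
        if cur_sum > best_sum:
            best_sum, best_start, best_end = cur_sum, cur_start, i + 1
    return best_sum, best_start, best_end

print(max_subarray_1d([1, -2, 3, 4, -1, 2, -5]))  # (8, 2, 6): subarray [3, 4, -1, 2]
```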

136. Modelling oxygen isotopes in the UVic Earth System Climate Model under preindustrial and Last Glacial Maximum conditions: impact of glacial-interglacial sea ice variability on seawater δ18O. Brennan, Catherine Elizabeth (10 September 2012)
Implementing oxygen isotopes (H₂¹⁸O, H₂¹⁶O) in coupled climate models provides both an important test of an individual model's hydrological cycle and a powerful tool for mechanistically exploring past climate changes while producing results directly comparable to isotope proxy records. The addition of oxygen isotopes to the University of Victoria Earth System Climate Model (UVic ESCM) is described. Equilibrium simulations are performed for preindustrial and Last Glacial Maximum (LGM) conditions. The oxygen isotope content of the model's preindustrial climate is compared against observations for precipitation and seawater, and the distribution of oxygen isotopes during the LGM is compared against available paleo-reconstructions.
Records of temporal variability in the oxygen isotopic composition of biogenic carbonates from ocean sediment cores inform our understanding of past continental ice volume and ocean temperatures. Interpretation of biogenic carbonate δ18O variability typically neglects changes due to factors other than ice volume and temperature, which is equivalent to assuming a constant local seawater isotopic composition. This investigation focuses on whether sea ice, which fractionates seawater during its formation, could shift the isotopic value of seawater between distinct climates. Glacial and interglacial states are simulated with the isotope-enabled UVic ESCM, and a global analysis is performed. Results indicate that interglacial-glacial sea ice variability produces as much as a 0.13 permil shift in local seawater δ18O, which corresponds to a potential error in local paleotemperature reconstruction of approximately 0.5 °C. Isotopic shifts due to sea ice variability are concentrated in the Northern Hemisphere, specifically in the Labrador Sea and the northeastern North Atlantic.
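The quoted 0.5 °C figure can be reproduced with a back-of-the-envelope calculation, assuming the commonly cited carbonate δ18O paleothermometry sensitivity of roughly 0.25 permil per °C; the slope is our assumption here, not a value stated in the abstract:

```python
# Convert a sea-ice-driven shift in seawater d18O into the apparent
# paleotemperature error it would cause if misattributed to temperature.
# The 0.25 permil/degC sensitivity is an assumed, commonly cited value.
d18o_shift_permil = 0.13          # local seawater shift quoted in the abstract
slope_permil_per_degc = 0.25      # assumed |d(delta18O)/dT| for carbonates

apparent_error_degc = d18o_shift_permil / slope_permil_per_degc
print(f"apparent paleotemperature error: {apparent_error_degc:.2f} degC")  # ~0.52
```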

137. Convex relaxation for the planted clique, biclique, and clustering problems. Ames, Brendan (2011)
A clique of a graph G is a set of pairwise adjacent nodes of G. Similarly, a biclique (U, V) of a bipartite graph G is a pair of disjoint, independent vertex sets such that each node in U is adjacent to every node in V in G. We consider the problems of identifying the maximum clique of a graph, known as the maximum clique problem, and identifying the biclique (U, V) of a bipartite graph that maximizes the product |U| · |V|, known as the maximum edge biclique problem. We show that finding a clique or biclique of a given size in a graph is equivalent to finding a rank-one matrix satisfying a particular set of linear constraints. These problems can be formulated as rank minimization problems and relaxed to convex programs by replacing rank with its convex envelope, the nuclear norm. Both problems are NP-hard, yet we show that our relaxation is exact when the input graph contains a large clique or biclique plus additional nodes and edges. For each problem, we provide two analyses of when our relaxation is exact. In the first, the diversionary edges are added deterministically by an adversary. In the second, each potential edge is added to the graph independently at random with fixed probability p. In the random case, our bounds match the earlier bounds of Alon, Krivelevich, and Sudakov, as well as Feige and Krauthgamer, for the maximum clique problem.
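A minimal sketch of this kind of nuclear-norm relaxation, written with cvxpy; the exact constraint set used here (the k² sum constraint and box bounds) is an illustrative assumption and simplifies the formulation analyzed in the thesis:

```python
import cvxpy as cp
import numpy as np

def planted_clique_relaxation(adj, k):
    """Nuclear-norm relaxation for recovering a planted k-clique.

    adj: boolean (n x n) adjacency matrix; k: target clique size.
    Seeks a surrogate for the rank-one indicator matrix v v^T of a clique.
    """
    n = adj.shape[0]
    X = cp.Variable((n, n), symmetric=True)
    constraints = [cp.sum(X) == k * k, X >= 0, X <= 1]
    # Off-diagonal entries on non-edges must vanish.
    for i in range(n):
        for j in range(i + 1, n):
            if not adj[i, j]:
                constraints.append(X[i, j] == 0)
    cp.Problem(cp.Minimize(cp.normNuc(X)), constraints).solve()
    # If the relaxation is exact, X is (near) rank one and its leading
    # eigenvector is supported on the planted clique.
    _, vecs = np.linalg.eigh(X.value)
    return np.argsort(-np.abs(vecs[:, -1]))[:k]
```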
We extend these results and techniques to the k-disjoint-clique problem. The maximum node k-disjoint-clique problem is to find a set of k disjoint cliques of a given input graph containing the maximum number of nodes. Given an input graph G and nonnegative edge weights w, the maximum mean weight k-disjoint-clique problem seeks to identify the set of k disjoint cliques of G that maximizes the sum of the average weights of the edges, with respect to w, of the complete subgraphs of G induced by the cliques. These problems may be considered as a way to pose the clustering problem. In clustering, one wants to partition a given data set so that the data items in each partition or cluster are similar and the items in different clusters are dissimilar. For the graph G such that the set of nodes represents a given data set and any two nodes are adjacent if and only if the corresponding items are similar, clustering the data into k disjoint clusters is equivalent to partitioning G into k disjoint cliques. Similarly, given a complete graph with nodes corresponding to a given data set and edge weights indicating similarity between each pair of items, the data may be clustered by solving the maximum mean weight k-disjoint-clique problem.
We show that both instances of the k-disjoint-clique problem can be formulated as rank-constrained optimization problems and relaxed to semidefinite programs using the nuclear norm relaxation of rank. We also show that when the input instance corresponds to a collection of k disjoint planted cliques plus additional edges and nodes, this semidefinite relaxation is exact for both problems. We provide theoretical bounds that guarantee the exactness of our relaxation, and give empirical examples of successful applications of our algorithm to synthetic data sets as well as data sets from clustering applications.

138. A Novel Sensorless Support Vector Regression Based Multi-Stage Algorithm to Track the Maximum Power Point for Photovoltaic Systems. Ibrahim, Ahmad Osman (2012)
Solar energy is energy derived from the sun in the form of solar radiation. Solar-powered electrical generation relies on photovoltaic (PV) systems and heat engines. These two technologies are widely used today to provide power either to standalone loads or for connection to the power system grid.
Maximum power point tracking (MPPT) is an essential part of a PV system: it is needed to extract maximum power output from a PV array under varying atmospheric conditions and so maximize the return on the initial investment. Many MPPT methods have been developed and implemented, including perturb and observe (P&O), incremental conductance (IC), and neural network (NN) based algorithms. These techniques are judged on their speed in locating the maximum power point (MPP) of a PV array under given atmospheric conditions, as well as the cost and complexity of implementing them. The P&O and IC algorithms have low implementation complexity, but their tracking speed is sluggish. NN-based techniques are faster than P&O and IC; however, they may not find the global optimum since they are prone to multiple local minima. To overcome the demerits of the aforementioned methods, support vector regression (SVR) based strategies have been proposed for the estimation of solar irradiation (for MPPT). A significant advantage of SVR-based strategies is that they can provide the global optimum, unlike NN-based methods. In the published literature on SVR-based MPPT algorithms, however, researchers have assumed a constant temperature. This assumption is not plausible in practice, as the temperature can vary significantly during the day. The temperature variation, in turn, can markedly affect the effectiveness of the MPPT process; at the same time, including temperature measurements in the process adds to the cost and complexity of the overall PV system and reduces its reliability.
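For context on the P&O baseline discussed above, a minimal sketch of the hill-climbing loop follows; the measure_pv and set_voltage callbacks are hypothetical placeholders for a real converter interface, not part of the thesis:

```python
def perturb_and_observe(measure_pv, set_voltage, v_init, step=0.5, iterations=100):
    """Minimal perturb-and-observe (P&O) MPPT loop, for illustration only.

    measure_pv(): returns the PV array's present (voltage, current) pair.
    set_voltage(v): commands the converter's PV-side reference voltage.
    """
    v_ref = v_init
    set_voltage(v_ref)
    v, i = measure_pv()
    p_prev = v * i
    direction = +1
    for _ in range(iterations):
        v_ref += direction * step          # perturb the operating voltage
        set_voltage(v_ref)
        v, i = measure_pv()                # observe the resulting power
        p = v * i
        if p < p_prev:                     # power fell: reverse the perturbation
            direction = -direction
        p_prev = p
    return v_ref
```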
The main goal of this thesis is to present a novel sensorless SVR-based multi-stage algorithm (MSA) for MPPT in PV systems. The proposed algorithm avoids outdoor irradiation and temperature sensors. The proposed MSA consists of three stages: the first stage estimates the initial values of irradiation and temperature; the second stage instantaneously estimates the irradiation under the assumption that the temperature is constant over one-hour intervals; and the third stage updates the estimated temperature once every hour. After the irradiation and temperature are estimated, the voltage corresponding to the MPP is estimated as well, and the resulting reference PV voltage is given to the power electronics interface. The proposed strategy is robust to rapid changes in solar irradiation and load, and it is also insensitive to ambient temperature variations. Simulation studies in PSCAD/EMTDC and Matlab demonstrate the effectiveness of the proposed technique.

139. Statistical method in a comparative study in which the standard treatment is superior to others. Ikeda, Mitsuru; Shimamoto, Kazuhiro; Ishigaki, Takeo; Yamauchi, Kazunobu
No description available.

140. Scaling conditional random fields for natural language processing. Cohn, Trevor A
This thesis deals with the use of Conditional Random Fields (CRFs; Lafferty et al. (2001)) for Natural Language Processing (NLP). CRFs are probabilistic models for sequence labelling which are particularly well suited to NLP. They have many compelling advantages over other popular models such as Hidden Markov Models and Maximum Entropy Markov Models (Rabiner, 1990; McCallum et al., 2001), and have been applied to a number of NLP tasks with considerable success (e.g., Sha and Pereira (2003) and Smith et al. (2005)). Despite their apparent success, CRFs suffer from two main failings. Firstly, they often over-fit the training sample. This is a consequence of their considerable expressive power, and can be limited by a prior over the model parameters (Sha and Pereira, 2003; Peng and McCallum, 2004). Secondly, the standard methods for CRF training are often very slow, sometimes requiring weeks of processing time. This efficiency problem is largely ignored in the current literature, although in practice the cost of training prevents the application of CRFs to many new, more complex tasks, and also prevents the use of densely connected graphs, which would allow much richer feature sets. (For the complete abstract, open the document.)
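For reference, the parameter prior mentioned above is typically a zero-mean Gaussian, which penalizes the linear-chain CRF log-likelihood as follows; this is the standard formulation, stated here for context rather than quoted from the thesis:

```latex
% Linear-chain CRF with features f_k, weights \lambda_k, and partition function Z:
\log p(\mathbf{y} \mid \mathbf{x}) = \sum_{t}\sum_{k} \lambda_k f_k(y_{t-1}, y_t, \mathbf{x}, t) - \log Z(\mathbf{x})
% Regularized training objective with a Gaussian prior of variance \sigma^2:
\mathcal{L}(\boldsymbol{\lambda}) = \sum_{i} \log p(\mathbf{y}^{(i)} \mid \mathbf{x}^{(i)}) - \sum_{k} \frac{\lambda_k^2}{2\sigma^2}
```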