341 |
Predicting future spatial distributions of population and employment for South East Queensland – a spatial disaggregation approach. Li, Tiebei. Unknown Date (has links)
The spatial distribution of future population and employment has become a focus of recent academic enquiry and planning policy concerns. This is largely driven by the rapid urban expansion in major Australian cities and the need to plan ahead for new housing growth and demand for urban infrastructure and services. At a national level, forecasts for population and employment are produced by the government and research institutions; however, there is a further need to break these forecasts down to a disaggregate geographic scale for growth management within regions. Appropriate planning for urban growth requires forecasts at fine-grained spatial units. This thesis has developed methodologies to predict the future settlement of population, employment and urban form by applying a spatial disaggregation approach. The methodology takes existing regional forecasts reported at regional geographic units and applies a novel spatially-based technique to step down the regional forecasts to smaller geographical units. South East Queensland (SEQ), one of the fastest-growing metropolitan regions in Australia, is the experimental context for the methodologies developed in the thesis. The research examines how spatial disaggregation methodologies can be used to enhance the forecasts for urban planning purposes and to derive a deeper understanding of the urban spatial structure under growth conditions. The first part of this thesis develops a method by which the SEQ population forecasts can be spatially disaggregated. This relates to a classical problem in geographical analysis known as the modifiable areal unit problem, where spatial data disaggregation may give inaccurate results due to spatial heterogeneity in the explanatory variables. Several statistical regression and dasymetric techniques are evaluated to spatially disaggregate population forecasts over the study area and to assess their relative accuracies.
An important contribution arising from this research is twofold: i) it extends the dasymetric method beyond its current simple form to techniques that incorporate more complex density assumptions to disaggregate the data, and ii) it selects a method based on balancing the costs and errors of the disaggregation for a study area. The outputs of the method are spatially disaggregated population forecasts across the smaller areas that can be directly used for urban form analysis and are also directly available for subsequent employment disaggregation. The second part of this thesis develops a method to spatially disaggregate the employment forecasts and examine their impact on the urban form. A new method for spatially disaggregating the employment data is evaluated; it analyses the trend and spatial pattern of historic regional employment based on employment determinants (for example, the local population and the proximity of an area to a shopping centre). The method applied, namely geographically weighted regression (GWR), accounts for the spatial effects of data autocorrelation and heterogeneity. Autocorrelation is where certain variables for employment determinants are related in space, and hence violate traditional statistical independence assumptions; heterogeneity is where the associations between variables change across space. The method uses a locally-fitted relationship to estimate employment in the smaller geography whilst being constrained by the regional forecast. Results show that, by accounting for spatial heterogeneity in the local dependency of employment, the GWR method generates superior estimates over a global regression model. The spatially disaggregated projections developed in this thesis can be used to better understand questions on urban form.
From a planning perspective, the results of spatial disaggregation indicate that the future growth of the population for SEQ is likely to maintain a spatially-dispersed pattern, whilst employment is likely to follow a more polycentric distribution focused around the new activity centres. Overall, the thesis demonstrates that the spatial disaggregation method can be applied to supplement regional forecasts and to seek a deeper understanding of future urban growth patterns. The development, application and validation of the spatial disaggregation methods will enhance the planner’s toolbox whilst responding to the data issues that inform urban planning and future development in a region.
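The core dasymetric step described above — distributing a regional control total across smaller zones in proportion to weights derived from ancillary data — can be sketched as follows. This is an illustrative simplification, not the thesis's implementation: the function name, land-use classes, and density values are assumptions, and the thesis evaluates considerably richer density assumptions than this simple class-density form.

```python
def dasymetric_disaggregate(regional_total, zone_areas, class_density):
    """Split a regional total across zones by area times assumed class density.

    zone_areas: list of (area, land_use_class) tuples for each small zone.
    class_density: assumed relative population density per land-use class.
    """
    weights = [area * class_density[cls] for area, cls in zone_areas]
    total_weight = sum(weights)
    # Proportional allocation preserves the regional total (pycnophylactic property).
    return [regional_total * w / total_weight for w in weights]

zones = [(10.0, "urban"), (40.0, "rural"), (5.0, "urban")]
estimates = dasymetric_disaggregate(1000.0, zones, {"urban": 50.0, "rural": 1.0})
```

A key property of any such scheme is that the disaggregated values sum back to the regional forecast being stepped down.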
|
342 |
Carlson type inequalities and their applications. Larsson, Leo. January 2003 (has links)
This thesis treats inequalities of Carlson type, i.e. inequalities of the form

‖f‖_X ≤ K ∏_{i=1}^{m} ‖f‖_{A_i}^{θ_i},

where ∑_{i=1}^{m} θ_i = 1 and K is some constant, independent of the function f. X and A_i are normed spaces, embedded in some Hausdorff topological vector space. In most cases, we have m = 2, and the spaces involved are weighted Lebesgue spaces on some measure space. For example, the inequality

∫_0^∞ f(x) dx ≤ √π (∫_0^∞ f²(x) dx)^{1/4} (∫_0^∞ x² f²(x) dx)^{1/4},

first proved by F. Carlson, is the above inequality with m = 2, θ_1 = θ_2 = 1/2, X = L_1(ℝ₊, dx), A_1 = L_2(ℝ₊, dx) and A_2 = L_2(ℝ₊, x² dx). In different situations, sufficient, and sometimes necessary, conditions are given on the weights in order for a Carlson type inequality to hold for some constant K. Carlson type inequalities have applications to e.g. moment problems, Fourier analysis, optimal sampling, and interpolation theory.
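As a quick numerical sanity check of Carlson's original inequality (an illustration added here, not part of the thesis), one can verify it for f(x) = e^(-x): the left side is ∫₀^∞ e^(-x) dx = 1, while the right side is √π·(1/2)^(1/4)·(1/4)^(1/4) = √π·(1/8)^(1/4) ≈ 1.054.

```python
import math

def integral(g, a=0.0, b=40.0, n=50000):
    # Midpoint rule; for f(x) = e^(-x) the tail beyond b = 40 is negligible.
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

f = lambda x: math.exp(-x)

lhs = integral(f)                                   # ∫ f(x) dx  ≈ 1
rhs = (math.sqrt(math.pi)
       * integral(lambda x: f(x) ** 2) ** 0.25      # (∫ f² dx)^(1/4)
       * integral(lambda x: x * x * f(x) ** 2) ** 0.25)  # (∫ x² f² dx)^(1/4)
```

The inequality holds with room to spare for this f; the constant √π is known to be sharp over all admissible f.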
|
343 |
From multitarget tracking to event recognition in videos. Brendel, William. 12 May 2011 (has links)
This dissertation addresses two fundamental problems in computer vision—namely,
multitarget tracking and event recognition in videos. These problems are challenging
because uncertainty may arise from a host of sources, including motion blur,
occlusions, and dynamic cluttered backgrounds. We show that these challenges can be
successfully addressed by using a multiscale, volumetric video representation, and
taking into account various constraints between events offered by domain knowledge.
The dissertation presents our two alternative approaches to multitarget tracking. The
first approach seeks to transitively link object detections across consecutive video
frames by finding the maximum independent set of a graph of all object detections.
Two maximum-independent-set algorithms are specified, and their convergence
properties theoretically analyzed. The second approach hierarchically partitions the
space-time volume of a video into tracks of objects, producing a segmentation graph of
that video. The resulting tracks encode rich contextual cues between salient video parts
in space and time, and thus facilitate event recognition and segmentation in space and
time.
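The first tracking approach above casts detection linking as a maximum independent set problem: detections are vertices, and conflicting detections (e.g., overlapping boxes competing for the same track slot) are edges. The dissertation specifies its own algorithms with analyzed convergence; the sketch below is only the standard greedy heuristic for the same combinatorial object, under assumed toy inputs, to make the formulation concrete.

```python
def greedy_mis(n, edges):
    """Greedy maximum-independent-set heuristic on vertices 0..n-1."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    # Consider low-degree vertices first; keep a vertex only if no
    # already-kept neighbour blocks it.
    chosen, blocked = [], set()
    for v in sorted(range(n), key=lambda v: len(adj[v])):
        if v not in blocked:
            chosen.append(v)
            blocked |= adj[v]
    return chosen

# Toy conflict graph: a chain of five detections, adjacent ones conflicting.
selected = greedy_mis(5, [(0, 1), (1, 2), (2, 3), (3, 4)])
```

On the chain, the heuristic recovers the optimal set {0, 2, 4}; in general, greedy selection gives no optimality guarantee, which is why dedicated algorithms matter.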
We also describe our two alternative approaches to event recognition. The first
approach seeks to learn a structural probabilistic model of an event class from training
videos represented by hierarchical segmentation graphs. The graph model is then used
for inference of event occurrences in new videos. Learning and inference algorithms
are formulated within the same framework, and their convergence rates theoretically
analyzed. The second approach to event recognition uses probabilistic first-order logic
for reasoning over continuous time intervals. We specify the syntax, learning, and
inference algorithms of this probabilistic event logic.
Qualitative and quantitative results on benchmark video datasets are also presented.
The results demonstrate that our approaches provide consistent video interpretation
with respect to acquired domain knowledge. We outperform most of the state-of-the-art
approaches on benchmark datasets. We also present our new basketball dataset that
complements existing benchmarks with new challenges. / Graduation date: 2011 / Access restricted to the OSU Community at author's request from May 12, 2011 - May 12, 2012
|
344 |
Evaluating the accuracy of imputed forest biomass estimates at the project level. Gagliasso, Donald. 01 October 2012 (has links)
Various methods have been used to estimate the amount of above-ground forest biomass across landscapes and to create biomass maps for specific stands or pixels across ownership or project areas. Without an accurate estimation method, land managers may end up with inaccurate biomass maps, which could lead to poorer decisions in their future management plans.
Previous research has shown that nearest-neighbor imputation methods can accurately estimate forest volume across a landscape by relating variables of interest to ground data, satellite imagery, and light detection and ranging (LiDAR) data. Alternatively, parametric models, such as linear and non-linear regression and geographically weighted regression (GWR), have been used to estimate net primary production and tree diameter.
The goal of this study was to compare various imputation methods for predicting forest biomass at a project planning scale (<20,000 acres) on the Malheur National Forest, located in eastern Oregon, USA. In this study I compared: 1) the predictive performance of linear regression, GWR, gradient nearest neighbor (GNN), most similar neighbor (MSN), random forest imputation, and k-nearest neighbor (k-nn) for estimating biomass (tons/acre) and basal area (sq. feet per acre) across 19,000 acres on the Malheur National Forest, and 2) MSN and k-nn when imputing forest biomass at spatial scales ranging from 5,000 to 50,000 acres.
To test the imputation methods a combination of ground inventory plots, LiDAR data, satellite imagery, and climate data were analyzed, and their root mean square error (RMSE) and bias were calculated. Results indicate that for biomass prediction, the k-nn (k=5) had the lowest RMSE and least amount of bias. The second most accurate method consisted of the k-nn (k=3), followed by the GWR model, and the random forest imputation. The GNN method was the least accurate. For basal area prediction, the GWR model had the lowest RMSE and least amount of bias. The second most accurate method was k-nn (k=5), followed by k-nn (k=3), and the random forest method. The GNN method, again, was the least accurate.
The accuracy of MSN, the current imputation method used by the Malheur National Forest, and k-nn (k=5), the most accurate imputation method from the second chapter, were then compared over six spatial scales: 5,000, 10,000, 20,000, 30,000, 40,000, and 50,000 acres. The root mean square difference (RMSD) and bias were calculated for each of the spatial scale samples to determine which was more accurate. MSN was found to be more accurate at the 5,000, 10,000, 20,000, 30,000, and 40,000 acre scales. K-nn (k=5) was more accurate at the 50,000 acre scale. / Graduation date: 2013
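The k-nn imputation and RMSE comparison at the heart of this study can be sketched as below. This is a generic illustration under assumed inputs, not the study's actual pipeline: real applications standardize the LiDAR/imagery features and use far richer feature vectors than the single feature shown.

```python
import math

def knn_impute(train_feats, train_resp, query, k=5):
    """Impute a response for `query` as the mean response of its k nearest
    training plots in feature space (squared Euclidean distance)."""
    order = sorted(range(len(train_feats)),
                   key=lambda i: sum((a - b) ** 2
                                     for a, b in zip(train_feats[i], query)))
    return sum(train_resp[i] for i in order[:k]) / k

def rmse(pred, obs):
    """Root mean square error between predicted and observed values."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

# Toy data: one LiDAR-derived feature, biomass response (tons/acre).
X = [[0.0], [1.0], [2.0], [3.0], [4.0]]
y = [0.0, 10.0, 20.0, 30.0, 40.0]
estimate = knn_impute(X, y, [2.0], k=3)
```

Varying k trades variance for bias, which is why the study compares k=3 against k=5 on held-out ground plots.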
|
345 |
Advances in magnetic resonance imaging of the human brain at 4.7 tesla. Lebel, Robert. 11 1900 (has links)
Magnetic resonance imaging is an essential tool for assessing soft tissues. The desire for increased signal-to-noise and improved tissue contrast has spurred development of imaging systems operating at magnetic fields exceeding 3.0 Tesla (T). Unfortunately, traditional imaging methods are of limited utility on these systems. Novel imaging methods are required to exploit the potential of high field systems and enable innovative clinical studies. This thesis presents methodological advances for human brain imaging at 4.7 T. These methods are applied to assess sub-cortical gray matter in multiple sclerosis (MS) patients.
Safety concerns regarding energy deposition in the patient preclude the use of traditional fast spin echo (FSE) imaging at 4.7 T. Reduced and variable refocusing angles were employed to moderate this energy deposition while maintaining high signal levels; an assortment of time-efficient FSE images is presented. Contrast changes were observed at low angles, but the images maintained a clinically useful appearance.
Heterogeneous transmit fields hinder the measurement of transverse relaxation times. A post-processing technique was developed to model the salient signal behaviour and enable accurate transverse relaxometry. This method is robust to transmit variations observed at 4.7 T and improves multislice imaging efficiency.
Gradient echo sequences can exploit the magnetic susceptibility difference between tissues to enhance contrast, but are corrupted near air/tissue interfaces. A correction method was developed and employed to reliably produce a multitude of quantitative and qualitative image sets.
Using these techniques, transverse relaxation times and susceptibility field shifts were measured in sub-cortical nuclei of relapsing-remitting MS patients. Abnormalities in the globus pallidus and pulvinar nucleus were observed in all quantitative methods; most other regions differed on one or more measures. Correlations with disease duration were not observed, reaffirming that the disease process commences prior to symptom onset.
The methods presented in this thesis enable efficient qualitative and quantitative imaging at high field strength. Unique challenges, notably patient safety and field variability, were overcome via sequence implementation and data processing. These techniques enable visualization and measurement of unique contrast mechanisms, which reveal insight into neurodegenerative diseases, including widespread sub-cortical gray matter damage in MS.
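For orientation, the baseline computation underlying transverse relaxometry is a mono-exponential fit S(TE) = S0·exp(-TE/T2) to the multi-echo signal. The sketch below shows only this textbook log-linear fit under ideal assumptions; the thesis's contribution is precisely that this naive model fails under the heterogeneous transmit fields at 4.7 T, where the full echo-train signal behaviour must be modelled instead.

```python
import math

def fit_t2(te, signal):
    """Estimate T2 from echo times and signals by log-linear least squares.

    Log-linearize S(TE) = S0 * exp(-TE / T2):  ln S = ln S0 - TE / T2,
    fit the slope by ordinary least squares, and return T2 = -1 / slope.
    """
    n = len(te)
    y = [math.log(s) for s in signal]
    te_mean, y_mean = sum(te) / n, sum(y) / n
    slope = (sum((t - te_mean) * (v - y_mean) for t, v in zip(te, y))
             / sum((t - te_mean) ** 2 for t in te))
    return -1.0 / slope  # T2 in the same units as TE

# Synthetic noiseless decay: S0 = 100, T2 = 50 (arbitrary units).
echo_times = [10.0, 20.0, 30.0, 40.0]
signals = [81.87307530779818, 67.0320046035639,
           54.881163609402744, 44.932896411722156]
t2_estimate = fit_t2(echo_times, signals)
```

With noiseless mono-exponential data the fit recovers T2 exactly; stimulated-echo contamination from imperfect refocusing pulses is what breaks this assumption in practice.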
|
346 |
Stochastic Modeling and Simulation of Gene Networks. Xu, Zhouyi. 06 May 2010 (links)
Recent research in experimental and computational biology has revealed the necessity of using stochastic modeling and simulation to investigate the functionality and dynamics of gene networks. However, sophisticated stochastic modeling techniques and efficient stochastic simulation algorithms (SSAs) for analyzing and simulating gene networks are still lacking. Therefore, the objective of this research is to design highly efficient and accurate SSAs, to develop stochastic models for certain real gene networks, and to apply stochastic simulation to investigate such gene networks. To achieve this objective, we developed several novel efficient and accurate SSAs. We also proposed two stochastic models for the circadian system of Drosophila and simulated the dynamics of the system. The K-leap method constrains the total number of reactions in one leap to a properly chosen number, thereby improving simulation accuracy. Since the exact SSA is a special case of the K-leap method when K=1, the K-leap method can naturally change from the exact SSA to an approximate leap method during simulation if necessary. The hybrid tau/K-leap and the modified K-leap methods are particularly suitable for simulating gene networks in which certain reactant molecular species have a small number of molecules. Although the existing tau-leap methods can significantly speed up stochastic simulation of certain gene networks, the mean of the number of firings of each reaction channel is not equal to the true mean. Therefore, all existing tau-leap methods produce biased results, which limits simulation accuracy and speed. Our unbiased tau-leap methods remove the bias in simulation results that exists in all current leap SSAs and therefore significantly improve simulation accuracy without sacrificing speed. In order to efficiently estimate the probability of rare events in gene networks, we applied the importance sampling technique to the next reaction method (NRM) of the SSA and developed a weighted NRM (wNRM).
We further developed a systematic method for selecting the values of importance sampling parameters. Applying our parameter selection method to the weighted SSA (wSSA) and the wNRM, we obtain an improved wSSA (iwSSA) and an improved wNRM (iwNRM), which provide substantial improvement over the wSSA in terms of simulation efficiency and accuracy. We also developed a detailed and a reduced stochastic model for circadian rhythm in Drosophila and employed our SSA to simulate circadian oscillations. Our simulations showed that both models could produce sustained oscillations and that the oscillation is robust to noise, in the sense that there is very little variability in the oscillation period although there are significant random fluctuations in oscillation peaks. Moreover, although average time delays are essential to simulation of oscillation, random changes in time delays within a certain range around the fixed average time delay cause little variability in the oscillation period. Our simulation results also showed that both models are robust to parameter variations and that the oscillation can be entrained by light/dark cycles.
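The leap methods above all build on the exact SSA (Gillespie's direct method), which the K-leap method reduces to when K=1. A minimal standalone sketch of the exact direct method for a one-species birth-death network (an illustration with assumed rate names, not the thesis's code) is:

```python
import random

def gillespie_birth_death(k_prod, k_deg, n0, t_end, seed=1):
    """Gillespie's exact direct method for the birth-death network
       0 -> X with propensity k_prod,   X -> 0 with propensity k_deg * n."""
    rng = random.Random(seed)
    t, n = 0.0, n0
    trajectory = [(0.0, n0)]
    while True:
        a_birth, a_death = k_prod, k_deg * n
        a_total = a_birth + a_death
        t += rng.expovariate(a_total)        # exponential waiting time
        if t >= t_end:
            break
        # Choose which reaction fires, proportional to its propensity.
        n += 1 if rng.random() * a_total < a_birth else -1
        trajectory.append((t, n))
    return trajectory

traj = gillespie_birth_death(k_prod=10.0, k_deg=1.0, n0=0, t_end=50.0)
```

Every reaction is simulated individually, which is exact but slow for large networks; leap methods fire many reactions per step, and the K-leap variant bounds how many.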
|
347 |
On Some Properties of Interior Methods for Optimization. Sporre, Göran. January 2003 (has links)
This thesis consists of four independent papers concerning different aspects of interior methods for optimization. Three of the papers focus on theoretical aspects while the fourth one concerns some computational experiments. The systems of equations solved within an interior method applied to a convex quadratic program can be viewed as weighted linear least-squares problems. In the first paper, it is shown that the sequence of solutions to such problems is uniformly bounded. Further, boundedness of the solution to weighted linear least-squares problems for more general classes of weight matrices than the one in the convex quadratic programming application is obtained as a byproduct. In many linesearch interior methods for nonconvex nonlinear programming, the iterates can "falsely" converge to the boundary of the region defined by the inequality constraints in such a way that the search directions do not converge to zero, but the step lengths do. In the second paper, it is shown that the multiplier search directions then diverge. Furthermore, the direction of divergence is characterized in terms of the gradients of the equality constraints along with the asymptotically active inequality constraints. The third paper gives a modification of the analytic center problem for the set of optimal solutions in linear semidefinite programming. Unlike the normal analytic center problem, the solution of the modified problem is the limit point of the central path, without any strict complementarity assumption. For the strict complementarity case, the modified problem is shown to coincide with the normal analytic center problem, which is known to give a correct characterization of the limit point of the central path in that case. The final paper describes some computational experiments concerning possibilities of reusing previous information when solving systems of equations arising in interior methods for linear programming.
<b>Keywords:</b> Interior method, primal-dual interior method, linear programming, quadratic programming, nonlinear programming, semidefinite programming, weighted least-squares problems, central path. <b>Mathematics Subject Classification (2000):</b> Primary 90C51, 90C22, 65F20, 90C26, 90C05; Secondary 65K05, 90C20, 90C25, 90C30.
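The weighted linear least-squares problem studied in the first paper, min_x ‖W^(1/2)(Ax − b)‖₂ with a diagonal weight matrix W, can be solved via the normal equations (AᵀWA)x = AᵀWb. The sketch below is a generic dense solve under assumed small inputs, not the thesis's analysis; in an interior method the weights change every iteration, which is exactly why uniform boundedness of the solutions is a nontrivial result.

```python
def weighted_lstsq(A, w, b):
    """Solve min_x || W^(1/2) (A x - b) ||_2, W = diag(w), via normal equations."""
    m, n = len(A), len(A[0])
    # Form A^T W A and A^T W b.
    M = [[sum(A[i][r] * w[i] * A[i][c] for i in range(m)) for c in range(n)]
         for r in range(n)]
    v = [sum(A[i][r] * w[i] * b[i] for i in range(m)) for r in range(n)]
    # Gaussian elimination with partial pivoting.
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        v[c], v[p] = v[p], v[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n):
                M[r][k] -= f * M[c][k]
            v[r] -= f * v[c]
    # Back substitution.
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (v[r] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x
```

For instance, fitting a single constant to observations 0 and 10 with weights 3 and 1 gives the weighted mean 2.5 rather than the unweighted 5.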
|
349 |
Algorithmic Trading : Hidden Markov Models on Foreign Exchange Data. Idvall, Patrik; Jonsson, Conny. January 2008 (has links)
In this master's thesis, hidden Markov models (HMMs) are evaluated as a tool for forecasting movements in a currency cross. With an ever-increasing electronic market making way for more automated trading, so-called algorithmic trading, there is a constant need for new trading strategies that try to find alpha, the excess return, in the market. HMMs are based on the well-known theory of Markov chains, but the states are assumed hidden, governing some observable output. HMMs have mainly been used for speech recognition and communication systems, but have lately also been applied to financial time series with encouraging results. Both discrete and continuous versions of the model are tested, as well as single- and multivariate input data. In addition to the basic framework, two extensions are implemented in the belief that they will further improve the prediction capabilities of the HMM. The first is a Gaussian mixture model (GMM), where each state is assigned a set of single Gaussians that are weighted together to replicate the density function of the stochastic process. This opens up for modeling non-normal distributions, which is often assumed for foreign exchange data. The second is an exponentially weighted expectation maximization (EWEM) algorithm, which takes time attenuation into consideration when re-estimating the parameters of the model. This allows old trends to be kept in mind while more recent patterns are at the same time given more attention. Empirical results show that the HMM using continuous emission probabilities can, for some model settings, generate acceptable returns with Sharpe ratios well over one, whilst the discrete version in general performs poorly. The GMM therefore seems to be a much-needed complement to the HMM.
The EWEM, however, does not improve results as one might have expected. Our general impression is that the predictor using HMMs that we have developed and tested is too unstable to be adopted as a trading tool on foreign exchange data, with too many factors influencing the results. More research and development is called for.
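The basic HMM machinery used throughout the thesis rests on the forward algorithm, which computes the likelihood of an observation sequence by marginalizing over the hidden state paths. The sketch below shows the discrete-emission case only (the thesis also implements continuous Gaussian and GMM emissions); the parameter values are illustrative.

```python
def hmm_likelihood(pi, A, B, obs):
    """Forward algorithm for a discrete-emission HMM.

    pi: initial state distribution; A[j][i]: transition prob j -> i;
    B[i][o]: prob of emitting symbol o in state i; obs: symbol sequence.
    """
    n = len(pi)
    # alpha[i] = P(o_1..o_t, state_t = i), updated one observation at a time.
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[j] * A[j][i] for j in range(n)) * B[i][o]
                 for i in range(n)]
    return sum(alpha)

# Two hidden regimes (e.g. trending vs. ranging), two observed symbols
# (e.g. up-tick vs. down-tick) -- illustrative numbers only.
pi = [0.6, 0.4]
A = [[0.7, 0.3], [0.4, 0.6]]
B = [[0.9, 0.1], [0.2, 0.8]]
likelihood = hmm_likelihood(pi, A, B, [0, 1, 0])
```

A useful consistency check is that the likelihoods of all possible observation sequences of a fixed length sum to one.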
|
350 |
K-way Partitioning Of Signed Bipartite Graphs. Omeroglu, Nurettin Burak. 01 September 2012 (has links) (PDF)
Clustering is the process by which data is differentiated and classified according to some criteria. As a result of the partitioning process, data is grouped into clusters for a specific purpose. In a social network, clustering of people is one of the most popular problems; therefore, we mainly concentrated on finding an efficient algorithm for this problem. In our study, data is made up of two types of entities (e.g., people or groups vs. political issues or religious beliefs) and, unlike most previous work, signed weighted bipartite graphs are used to model relations among them. For the partitioning criterion, we use the strength of the opinions between the entities. Our main intention is to partition the data into k clusters so that entities within clusters have strong relationships. One such example from a political domain is the opinion of people on issues. Using the signed weights on the edges, these bipartite graphs can be partitioned into two or more clusters. In the political domain, a cluster represents a strong relationship among a group of people and a group of issues. After partitioning, each cluster in the result set contains like-minded people and the issues they advocate.
Our work introduces a general mechanism for k-way partitioning of signed bipartite graphs. A key advantage of our approach is that it does not require any preliminary information about the structure of the input dataset. The idea has been illustrated on real and randomly generated data, and promising results have been shown.
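To make the problem setting concrete: nodes on one side are people, nodes on the other are issues, and signed edge weights encode agreement (+) or disagreement (−). The sketch below is only a toy local-move heuristic for the same objective (maximize signed weight kept inside clusters), not the thesis's algorithm; the deterministic initialization and function name are assumptions made for reproducibility.

```python
def partition_signed_bipartite(edges, left_n, right_n, k, iters=10):
    """Toy k-way local-move heuristic on a signed bipartite graph.

    edges: (u, v, w) with u in the left part, v in the right part,
    and w a signed weight (positive = agreement, negative = disagreement).
    """
    # Deterministic round-robin initialization (a simplification).
    cl = {("L", u): u % k for u in range(left_n)}
    cl.update({("R", v): v % k for v in range(right_n)})
    for _ in range(iters):
        for node in list(cl):
            side, i = node
            # Signed attachment of this node to each cluster.
            gains = [0.0] * k
            for u, v, w in edges:
                if side == "L" and u == i:
                    gains[cl[("R", v)]] += w
                elif side == "R" and v == i:
                    gains[cl[("L", u)]] += w
            # Move to the cluster with the largest signed attachment.
            cl[node] = max(range(k), key=lambda c: gains[c])
    return cl

# Two like-minded pairs linked positively, cross-linked negatively.
edges = [(0, 0, 1.0), (1, 1, 1.0), (0, 1, -1.0), (1, 0, -1.0)]
clusters = partition_signed_bipartite(edges, 2, 2, 2)
```

On this example the heuristic places each person with the issue they support and separates the two opposing pairs.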
|