1 |
Essays on Spatial Externality and Spatial Heterogeneity in Applied Spatial Econometrics
Kang, Dongwoo January 2015 (has links)
This dissertation consists of three empirical essays. Their contributions are, first, to develop spatial weight matrices based on more than pure geographical proximity for modeling interregional externalities and, second, to propose different approaches for discovering spatial heterogeneity in the data generating processes under investigation, including in the interregional externalities themselves. The dissertation offers economic geographers and regional scientists interested in the modeling and measurement of spatial externalities a set of practical examples, based on new datasets and state-of-the-art spatial econometric techniques, to consider for their own work. I hope it provides guidance on how various aspects of spatial externalities can be incorporated in traditional spatial weight matrices and on how spatially heterogeneous the impact of externalities can be. The results should help spatial and regional policy makers better understand various aspects of interregional dependence in regional economic systems and devise locally effective, place-tailored spatial and regional policies. The first essay investigates the negative spatial externalities of irrigation on corn production. The spatial externalities of irrigation water are well known but have never been examined in a spatial econometric framework. We investigate their role in a theoretical model of profit-maximizing farming and verify our predictions empirically in a crop production function measured across US Corn Belt counties. The interregional groundwater and surface water externalities are modeled on the basis of actual aquifer and river stream network characteristics. The second essay examines the positive spatial externalities of academic and private R&D spending in the frame of a regional knowledge production function measured across US counties.
It distinguishes local knowledge spillovers, determined by geographical proximity, from distant spillovers, which we capture through a matrix of patent creation-citation flows. The advantage of the latter matrix is its capacity to capture the technological proximity between counties as well as the direction of knowledge spillovers, two elements the literature has so far missed. The last essay highlights and measures spatial heterogeneity in the marginal effects of the innovation inputs, especially of the interregional knowledge spillovers. The knowledge production function literature has adopted geographically aggregated units and controlled for region-specific conditions to highlight the presence of spatial heterogeneity in regional knowledge creation. However, most empirical studies have relied on a global modeling approach that measures spatially homogeneous marginal effects of knowledge inputs. This essay explains the sources of heterogeneity in innovation and then measures the spatial heterogeneity in the marginal effects of knowledge spillovers, as well as of other knowledge input factors, across US counties. For this purpose, the nonparametric local modeling approaches of Geographically Weighted Regression (GWR) and Mixed GWR are used.
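GWR's central mechanism, fitting a separate weighted least-squares regression at each location with a distance-decay kernel, can be sketched as follows. The toy data, the Gaussian kernel, and the fixed bandwidth are illustrative assumptions, not values or choices taken from the essay.

```python
import numpy as np

def gwr_coefficients(X, y, coords, focal, bandwidth):
    """Fit a weighted least-squares regression centred on one focal location.

    Weights follow a Gaussian distance-decay kernel, a standard GWR choice;
    nearby observations get more influence on the local coefficients.
    """
    d = np.linalg.norm(coords - focal, axis=1)   # distances to the focal point
    w = np.exp(-0.5 * (d / bandwidth) ** 2)      # Gaussian kernel weights
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)  # local beta-hat

# Toy data: an intercept plus one "knowledge input", five counties on a line
coords = np.array([[0.0], [1.0], [2.0], [3.0], [4.0]])
X = np.column_stack([np.ones(5), np.array([1.0, 2.0, 3.0, 4.0, 5.0])])
y = np.array([1.0, 2.1, 2.9, 4.2, 5.0])

# Local coefficient estimates differ by focal location, unlike a global fit
beta_near = gwr_coefficients(X, y, coords, focal=np.array([0.0]), bandwidth=1.5)
beta_far = gwr_coefficients(X, y, coords, focal=np.array([4.0]), bandwidth=1.5)
```

Repeating the local fit at every observation point yields a surface of coefficients, which is how GWR exposes spatially heterogeneous marginal effects.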
|
2 |
Spatially explicit load enrichment calculation tool and cluster analysis for identification of E. coli sources in Plum Creek Watershed, Texas
Teague, Aarin Elizabeth 02 June 2009 (has links)
According to the 2004 303(d) List, 192 segments are impaired by bacteria in the State of
Texas. Impairment of streams due to bacteria is of major concern in several urban
watersheds in Texas. In order to assess, monitor and manage water quality, it is
necessary to characterize the sources of pathogens within the watershed. The objective
of this study was to develop a spatially explicit method that allocates E. coli loads in the
Plum Creek watershed in East Central Texas. A section of Plum Creek is classified as
impaired due to bacteria. The watershed contains primarily agricultural activity and is in
the midst of an urban housing boom.
Based on stakeholder input, possible sources of E. coli were first identified in the
different regions of the watershed. Locations of contributing non-point and point sources
in the watershed were defined using Geographic Information Systems (GIS). By
distributing livestock, wildlife, wastewater treatment plants, septic systems, and pet
sources, the bacterial load in the watershed was spatially characterized. Contributions
from each source were then quantified by applying source-specific bacterial production
rates. Each contributing source was then ranked for the entire watershed.
Cluster and discriminant analysis was then used to identify similar regions within the
watershed for assistance in selection of appropriate best management practices. The
results of the cluster analysis and the spatially explicit method were compared to identify
regions that require further refinement of the SELECT method and data inputs.
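The load-allocation step described above, distributing source counts spatially and multiplying them by source-specific production rates before ranking, can be sketched roughly as follows. All rates, source categories, and counts here are made-up placeholders, not values from the SELECT study.

```python
# Hypothetical daily E. coli production rates (cfu per head or unit per day).
# Real SELECT rates come from the literature and differ by source type.
RATES = {"cattle": 1.0e10, "deer": 5.0e8, "septic": 1.0e9, "pets": 5.0e9}

# Placeholder source counts per subwatershed (not Plum Creek data)
subwatersheds = {
    "SW1": {"cattle": 120, "deer": 40, "septic": 15, "pets": 60},
    "SW2": {"cattle": 30, "deer": 90, "septic": 50, "pets": 200},
}

def allocate_loads(units, rates):
    """Return total load per subwatershed and per source category."""
    per_source = {s: 0.0 for s in rates}
    per_unit = {}
    for unit, counts in units.items():
        load = {s: counts[s] * rates[s] for s in counts}
        per_unit[unit] = sum(load.values())
        for s, v in load.items():
            per_source[s] += v
    return per_unit, per_source

per_unit, per_source = allocate_loads(subwatersheds, RATES)
# Rank source categories by total contribution across the watershed
ranked = sorted(per_source, key=per_source.get, reverse=True)
```

The per-subwatershed totals support mapping spatial hot spots, while the watershed-wide ranking guides which sources to target with best management practices.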
|
3 |
The Hotelling model revisited
Yang, Jhen-yuan 23 June 2005 (has links)
Hotelling (1929) proposed a two-stage theory of firm location competition and the principle of minimum differentiation, breaking with a traditional literature built mainly on quantity and price competition models of the firm. D'Aspremont, Gabszewicz and Thisse (1979) later proposed the principle of maximum differentiation, and the contrast between the two principles has driven continuous discussion and innovation in economics.
This research revises the basic assumptions of Hotelling (1929) by adding the assumption that firms bear a marginal cost and by considering whether consumers voluntarily bear the travel cost. It examines how a firm should choose its optimal location under two pricing systems, mill (f.o.b.) pricing and delivered pricing, and under different market structures, such as facing consumers at fixed positions or an incumbent firm at a given site. It also discusses, for each pricing system, the conditions under which the principle of minimum differentiation or the principle of maximum differentiation holds, and whether other optimal location choices can arise.
|
4 |
Comparison of edit history clustering techniques for spatial hypertext
Mandal, Bikash 12 April 2006 (has links)
History mechanisms available in hypertext systems allow access to past user interactions
with the system. This helps users evaluate past work and learn from past activity. It also
allows systems to identify usage patterns and potentially predict behaviors with the system.
Thus, recording history is useful to both the system and the user.
Various tools and techniques have been developed to group and annotate history in
Visual Knowledge Builder (VKB). But the problem with these tools is that the
operations are performed manually. For a large VKB history growing over a long period
of time, performing grouping operations using such tools is difficult and time
consuming. This thesis examines various methods to analyze VKB history in order to
automatically group/cluster all the user events in this history.
In this thesis, three different approaches are compared. The first approach is a pattern
matching approach identifying repeated patterns of edit events in the history. The second
approach is a rule-based approach that uses simple rules, such as group all consecutive
events on a single object. The third approach uses hierarchical agglomerative clustering
(HAC) where edits are grouped based on a function of edit time and edit location.
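The third approach can be illustrated with a minimal sketch: clustering toy edit events on a combined time-and-location feature using off-the-shelf hierarchical agglomerative clustering. The event values, the scaling weights, and the use of average linkage are assumptions for illustration, not the thesis's actual implementation.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

# Toy edit history: (timestamp in seconds, x position, y position) per event.
# Two bursts of activity in two screen regions, separated by a long pause.
events = np.array([
    [0.0, 10, 10],
    [2.0, 12, 11],
    [4.0, 11, 9],
    [300.0, 200, 150],
    [303.0, 205, 148],
])

# Scale time and location so neither dominates the combined distance;
# how to weight them is itself a design decision in this kind of clustering.
scaled = events / np.array([10.0, 50.0, 50.0])

Z = linkage(scaled, method="average")            # agglomerative merge tree
groups = fcluster(Z, t=2, criterion="maxclust")  # cut the tree into 2 groups
```

Edits close in both time and screen position merge early, so the cut recovers the two bursts as separate groups, mirroring how a human might segment an editing session.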
The contributions of this thesis work are: (a) developing tools to automatically cluster
large VKB history using these approaches, (b) analyzing performance of each approach in order to determine their relative strengths and weaknesses, and (c) answering the
question of how well the automatic clustering approaches perform, by comparing the
results obtained from the automatic tool with those obtained from the manual grouping
performed by actual users on the same set of VKB history.
Results obtained from this thesis work show that the rule-based approach performs the
best in that it best matches human-defined groups and generates the fewest
groups. The hierarchical agglomerative clustering approach falls between the other two
approaches with regards to identifying human-defined groups. The pattern-matching
approach generates many potential groups, but only a few match those generated
by actual VKB users.
|
5 |
A systematic methodology toward creating spatial quality in urban settings
Thomas, Derek Charles 22 September 2023 (has links) (PDF)
Urban settings, conceived and implemented in the climate of modern-day urbanisation and technology, show undesirable trends. In the typical situation, due to the absence of the urban dweller's participation in the planning and design process, prescriptive decision-making directs and shapes the urban environment on the basis of the objectives of the trained professional or a developer. The disciplines of architecture, urban design, and urban planning, well endowed with research in terms of their philosophical, cultural, and historical dimensions, traditionally overlook systematic and impartial methods in realisation of design objectives. In addition, architects generally focus within the confines of the immediate site, ignoring the wider context. Urban planners and designers tend to follow their perceptions of the urban setting and pragmatic objectives, and to overlook the elements which constitute spatial quality for others. Planning and design tasks performed in this way are prescriptive and perfunctory, and do not meet the urban dweller's perceptions of spatial quality. Although the planning and design disciplines can avail themselves of considerable intellectual resources, systematic methods to synthesise both the subjective opinion of the urban dweller and expert opinion of specialists are lacking. With current global scenarios, the need to develop methods for participation becomes even more relevant and urgent. The likelihood of high-density settings is ominous without changes in planning and design approaches. The overall objective of this thesis is to develop a methodology which meets the demands of the situations described. The data for this study are derived from a theoretical examination of the attributes which contribute to the perceptions of spatial quality in the urban setting. A thematic analysis, carried out against the background of factors, such as spatial patterning, links social well-being with characteristics of the urban environment. 
Consistent and invariant spatial quality indicators are derived which are then associated with spatial performance. A spatial frame is then identified to structure the methodology into recognisable and manageable urban spatial components. Expectations of spatial performance are translated systematically into primary planning and design generators to complete the elements of the methodology. The problem of how to involve urban dwellers and specialist designers and planners for a consensus useful in the planning process is examined. The comprehensive methodology developed by Sondheim for assessing environmental impacts incorporates the necessary features for adaptation to new urban settings and resolves the problem of polling divergent priorities without requiring discussion or consensus amongst participants. The matrix procedures of the chosen methodology involve both subjective and informed qualitative evaluation without the use of environmental indices, which are found wanting as measures of quality. Post-multiplication of the matrices produces ranking of planning and design generators in order of importance, which, effectively, represents the choice of the urban dweller. The methodology is operationalised to test the matrix and post-multiplication procedures, and the rationality of the result. For the case model presented, a rational result was obtained, which supports the adaptation of the methodology for creative purposes. The ranking is referred to a source book, which allows the systematic transformation of the primary planning and design generators into recognisable and conventional planning directives. As a contribution to the planning and design fields, the methodology is a useful creative tool, effectively addressing the problem of the interface between planner and user in the attainment of spatial quality in the development of new urban settings. 
Furthermore, the procedures can be operationalised to meet an infinite range of variables, or spatial scenarios within the urban setting.
|
6 |
Sihui_Wang_thesis.pdf
Sihui Wang (17522025) 01 December 2023 (has links)
<p dir="ltr">Traditionally, spatial data pertains to observations made at various spatial locations, with interpolation commonly being the central aim of such analyses. However, the relevance of this data has expanded notably to scenarios where the spatial location represents input variables and the observed response variable embodies the model outcome, a concept applicable in arenas like computer experiments and recommender systems. Spatial prediction, pervasive across many disciplines, often employs linear prediction due to its simplicity. Kriging, originally developed in mining engineering, has found utility in diverse fields such as environmental sciences, hydrology, natural resources, remote sensing, and computer experiments, among others. In these applications, Gaussian processes have emerged as a powerful tool. Kriging, which is also essential for kernel learning methods in machine learning, necessitates the inversion of the covariance matrix of the observed random variables.</p><p dir="ltr">A primary challenge in spatial data analysis, in this expansive sense, is handling the large covariance matrix involved in the best linear prediction, or Kriging, and the Gaussian likelihood function. Recent studies have revealed that the covariance matrix can become ill-conditioned with increasing dimensions. This underscores the need for alternative methodologies for analyzing extensive spatial data that avoid relying on the full covariance matrix. Although various strategies, such as covariance tapering, block diagonal matrices, and the traditional low-rank model with perturbation, have been proposed to combat the computational hurdles linked with large spatial data, not all effectively resolve the predicament of an ill-conditioned covariance matrix.</p><p dir="ltr">In this thesis, we examine two promising strategies for the analysis of large-scale spatial data. The first is the low-rank approximation, a tactic that exists in multiple forms.
Traditional low-rank models employ perturbation to handle the ill-conditioned covariance matrix but fall short in data prediction accuracy. We propose the use of a pseudo-inverse for the low-rank model as an alternative to full Kriging in handling massive spatial data. We will demonstrate that the prediction variance of the proposed low-rank model can be comparable to that of full Kriging, while offering computational cost benefits. Furthermore, our proposed low-rank model surpasses the traditional low-rank model in data interpolation. Consequently, when full Kriging is untenable due to an ill-conditioned covariance matrix, our proposed low-rank model becomes a viable alternative for interpolating large spatial data sets with high precision.</p><p dir="ltr">The second strategy involves harnessing deep learning for spatial interpolation. We explore machine learning approaches adept at modeling voluminous spatial data. Contrary to the majority of existing research that applies deep learning exclusively to model the mean function in spatial data, we concentrate on encapsulating spatial correlation. This approach harbors potential for effectively modeling non-stationary spatial phenomena. Given that Kriging is predicated on the data being influenced by an unknown constant mean, serving as the best linear unbiased predictor under this presupposition, we foresee its superior performance in stationary cases. Conversely, DeepKriging, with its intricate structure for both the mean function and spatial basis functions, exhibits enhanced performance in the realm of nonstationary data.</p>
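The pseudo-inverse low-rank idea can be sketched in a few lines: approximate the covariance through a small set of knot locations and invert only the small knot matrix, using a pseudo-inverse where it is ill-conditioned. The squared-exponential kernel, the knot layout, and the nugget value below are illustrative assumptions, not the thesis's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sq_exp_cov(a, b, length=0.2):
    """Squared-exponential covariance between two sets of 1-D locations."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

# n = 40 noisy observations of a smooth surface; r = 8 knots, r << n
s_obs = np.linspace(0.0, 1.0, 40)
y = np.sin(2 * np.pi * s_obs) + 0.05 * rng.standard_normal(40)
knots = np.linspace(0.0, 1.0, 8)

C = sq_exp_cov(s_obs, knots)        # n x r cross-covariance
K = sq_exp_cov(knots, knots)        # r x r knot covariance
Kpinv = np.linalg.pinv(K)           # pseudo-inverse instead of a plain inverse

# Low-rank covariance C K^+ C' plus a small nugget keeps the system well-posed
Sigma_lr = C @ Kpinv @ C.T + 0.05**2 * np.eye(40)

# Kriging-style prediction at a new location through the low-rank structure
s_new = np.array([0.5])
c_new = sq_exp_cov(s_new, knots)
w = np.linalg.solve(Sigma_lr, C @ Kpinv @ c_new.T)
pred = w.T @ y
```

Only the r x r knot matrix is (pseudo-)inverted and the n x n solve involves a well-conditioned matrix, which is the computational point of the low-rank construction.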
|
7 |
The interaction of transient and enduring spatial representations: Using visual cues to maintain perceptual engagement
Hodgson, Eric P. 05 August 2008 (links)
No description available.
|
8 |
A Spatial Statistical Analysis to Estimate the Spatial Dynamics of the 2009 H1N1 Pandemic in the Greater Toronto Area
Fan, WENYONG 05 November 2012 (has links)
The 2009 H1N1 pandemic caused serious concern worldwide due to the novel biological features of the virus strain and the high morbidity rate among youth. The urban scale is crucial for analyzing the pandemic in metropolitan areas such as the Greater Toronto Area (GTA) of Canada because of its large population. The challenge of exploring the spatial dynamics of H1N1 is exacerbated by data scarcity and the absence of an immediately applicable methodology at such a scale. In this study, a stepwise methodology is developed, and a retrospective spatial statistical analysis is conducted with it to estimate the spatial dynamics of the 2009 H1N1 pandemic in the GTA under data scarcity. Global and local spatial autocorrelation analyses are carried out using multiple spatial analysis tools to confirm the existence and significance of spatial clustering effects. A Generalized Linear Mixed Model (GLMM) implemented in the Statistical Analysis System (SAS) is used to estimate the area-specific spatial dynamics. The GLMM is configured both as a spatial model that incorporates an Intrinsic Gaussian Conditionally Autoregressive (ICAR) component and as a non-spatial model. Comparing the results of the spatial and non-spatial configurations suggests that the spatial GLMM, which incorporates the ICAR model, provides better predictability. This indicates that the methodology developed in this study can be applied in epidemiological studies to analyze spatial dynamics in similar scenarios. / Thesis (Master, Geography) -- Queen's University, 2012-10-30 17:41:28.445
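The ICAR component mentioned above is built from the neighborhood structure alone. A minimal sketch of its precision matrix, using a hypothetical four-area chain rather than the actual GTA adjacency, is:

```python
import numpy as np

# Adjacency for four areas in a chain 1-2-3-4 (illustrative, not the GTA map)
W = np.array([
    [0, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [0, 0, 1, 0],
], dtype=float)

D = np.diag(W.sum(axis=1))  # diagonal matrix of neighbour counts
Q = D - W                   # ICAR precision matrix: rank-deficient by design

# The ICAR prior is improper: the all-ones vector lies in the null space of Q,
# which is why fitting imposes a sum-to-zero constraint on the random effects.
null_dir = Q @ np.ones(4)
```

In the GLMM, this Q scales the prior precision of the area-level random effects, so each area's effect is shrunk toward the average of its neighbours' effects.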
|
9 |
The business end of objects: Monitoring object orientation
Mello, Catherine 16 July 2009 (links)
No description available.
|
10 |
Tempering spatial autocorrelation in the residuals of linear and generalized models by incorporating selected eigenvectors
Cervantes, Juan 01 August 2018 (links)
In order to account for spatial correlation in residuals in regression models for areal and lattice data, different disciplines have developed distinct approaches. Bayesian spatial statistics typically has used a Gaussian conditional autoregressive (CAR) prior on random effects, while geographers utilize Moran's I statistic as a measure of spatial autocorrelation and the basis for creating spatial models. Recent work in both fields has recognized and built on a common feature of the two approaches, specifically the implicit or explicit incorporation into the linear predictor of eigenvectors of a matrix representing the spatial neighborhood structure. The inclusion of appropriate choices of these vectors effectively reduces the spatial autocorrelation found in the residuals.
We begin with extensive simulation studies to compare Bayesian CAR models, Restricted Spatial Regression (RSR), Bayesian Spatial Filtering (BSF), and Eigenvector Spatial Filtering (ESF) with respect to estimation of fixed-effect coefficients, prediction, and reduction of residual spatial autocorrelation. The latter three models incorporate the neighborhood structure of the data through the eigenvectors of a Moran operator.
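The eigenvectors referred to above come from a Moran operator of the form M C M, where C is the neighborhood matrix and M projects out the fixed effects. A minimal sketch with an illustrative four-area adjacency (not data from the study) is:

```python
import numpy as np

# Binary rook adjacency for a 2x2 lattice of areas (illustrative only)
C = np.array([
    [0, 1, 1, 0],
    [1, 0, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 1, 0],
], dtype=float)

n = C.shape[0]
X = np.ones((n, 1))                                # intercept-only design
M = np.eye(n) - X @ np.linalg.pinv(X.T @ X) @ X.T  # residual projector

MCM = M @ C @ M                                    # Moran operator
eigval, eigvec = np.linalg.eigh(MCM)               # symmetric eigendecomposition

# Candidate spatial predictors: eigenvectors with the largest (most positive)
# eigenvalues, i.e. the smoothest positively autocorrelated map patterns
order = np.argsort(eigval)[::-1]
candidates = eigvec[:, order[:2]]
```

Adding a subset of these eigenvectors to the linear predictor absorbs spatially structured variation, which is what reduces the autocorrelation left in the residuals.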
We propose an alternative selection algorithm for all candidate predictors that avoids the ad hoc approach of RSR and selects on both model fit and reduction of autocorrelation in the residuals. The algorithm depends on the marginal posterior density of a quantity that measures what proportion of the total variance can be explained by the measurement error. It selects candidate predictors that lead to a high probability that this quantity is large, in addition to having large marginal posterior inclusion probabilities (PIP) according to model fit. Two methods were constructed: the first is based on orthogonalizing all of the candidate predictors, while the second can be applied to the design matrix of candidate predictors without orthogonalization.
Our algorithm was applied to the same simulated data that compared the RSR, BSF and ESF models. Although our algorithm performs similarly to the established methods, the first of our selection methods shows an improvement in execution time. In addition, our approach is a statistically sound, fully Bayesian method.
|