21

Moderní statistické postupy ve vyhodnocování pevnosti betonu v tlaku v konstrukcích prostřednictvím tvrdoměrných zkoušek / Modern statistical approach in evaluating the compressive strength of concrete in structures using the rebound hammer method

Janka, Marek January 2022 (has links)
This diploma thesis examines various linear regression methods and their use in establishing regression relationships between the compressive strength of concrete determined by the indirect method and that determined by crushing specimens in a press. It deals mainly with the uncertainty of values measured by the indirect method, which is neglected by the commonly used ordinary least squares regression. It also covers the weighted least squares method, suitable for so-called heteroskedastic data, and compares the different regression methods on several sets of previously measured data. The final part of the work examines the effect of removing overly influential points identified by Cook's distance, which may skew the regression results.
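The contrast between ordinary and weighted least squares on heteroskedastic calibration data, and the role of Cook's distance, can be sketched as follows (a minimal illustration on made-up rebound readings, not the thesis data; the variable names and the noise model are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical rebound-hammer readings (indirect method) vs. press strengths;
# the noise grows with the reading, i.e. the data are heteroskedastic.
x = rng.uniform(20, 50, 80)                      # rebound numbers
sigma = 0.05 * x                                  # error sd proportional to x
y = 1.2 * x - 5 + rng.normal(0, sigma)            # strengths + noise

X = np.column_stack([np.ones_like(x), x])

# Ordinary least squares: beta = (X'X)^{-1} X'y
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]

# Weighted least squares with inverse-variance weights 1/sigma^2
w = 1.0 / sigma**2
W = np.diag(w)
beta_wls = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

# Cook's distance for each point under the OLS fit
H = X @ np.linalg.inv(X.T @ X) @ X.T              # hat matrix
h = np.diag(H)
resid = y - X @ beta_ols
p = X.shape[1]
s2 = resid @ resid / (len(y) - p)
cooks_d = resid**2 / (p * s2) * h / (1 - h)**2

print("OLS slope:", beta_ols[1], "WLS slope:", beta_wls[1])
print("max Cook's D:", cooks_d.max())
```

Points with large Cook's distance would then be candidates for removal before refitting, as the abstract describes.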
22

Proportional income taxation and heterogeneous labour supply responses : A study of gender-based heterogeneity in extensive margin labour supply decisions in response to changes in proportional income taxation in Swedish municipalities from 1960 to 1990

Syrén, Elliott January 2022 (has links)
This thesis is, to my knowledge, the first study utilising data from the Swedish population and housing censuses between 1960 and 1990, merged with other data from the same period, to estimate extensive margin labour supply responses to changes in municipal tax rates. Given that women historically have not faced the same structural labour market preconditions as men, the empirical strategy is designed to allow for an analysis of gender-based heterogeneity in labour supply responses. Using a weighted fixed effects framework, estimates of the average, over time and between municipalities, of the effects of tax rate increases are presented. Using the preferred main model specification, the estimated average tax rate elasticity is -0.165 for men and 0.3513 for women. Additionally, an attempt is made to estimate an effect using a difference-in-differences framework, treating the overall largest municipal tax rate changes as a form of quasi-experimental treatment. The results of the main analysis indicate the presence of gender-based heterogeneity in extensive margin labour supply responses between 1960 and 1990 within the administrative region in question.
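A two-way (municipality and year) weighted fixed effects elasticity estimate of the kind described can be sketched on synthetic panel data (every number and name here is hypothetical, not the census data; the within transformation assumes a balanced panel):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# Hypothetical panel: 50 municipalities observed in 4 census years.
df = pd.DataFrame({
    "muni": np.repeat(np.arange(50), 4),
    "year": np.tile([1960, 1970, 1980, 1990], 50),
})
df["pop"] = rng.integers(1_000, 100_000, len(df))         # weights
df["log_tax"] = np.log(rng.uniform(0.10, 0.30, len(df)))
muni_fe = rng.normal(0, 0.1, 50)
year_fe = dict(zip([1960, 1970, 1980, 1990], [0.0, 0.02, 0.04, 0.06]))
df["log_emp"] = (-0.2 * df["log_tax"] + muni_fe[df["muni"]]
                 + df["year"].map(year_fe) + rng.normal(0, 0.01, len(df)))

# Two-way within transformation: demean by municipality and by year
# (valid for a balanced panel).
for v in ["log_emp", "log_tax"]:
    df[v + "_dm"] = (df[v]
                     - df.groupby("muni")[v].transform("mean")
                     - df.groupby("year")[v].transform("mean")
                     + df[v].mean())

# Population-weighted least squares on the demeaned variables.
w = df["pop"].to_numpy(float)
x = df["log_tax_dm"].to_numpy()
y = df["log_emp_dm"].to_numpy()
beta = (w * x * y).sum() / (w * x * x).sum()
print("estimated tax elasticity:", beta)   # should be close to the simulated -0.2
```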
23

Estimation de paramètres pour des processus autorégressifs à bifurcation / Parameter estimation for bifurcating autoregressive processes

Blandin, Vassili 26 June 2013 (has links)
Bifurcating autoregressive (BAR) processes have been widely investigated in recent years. These processes, which adapt autoregressive processes to a binary tree structure, are of interest in biology, since the binary tree structure allows an easy analogy with cell division. The aim of this thesis is to estimate the parameters of some variants of these BAR processes, namely integer-valued BAR processes and random-coefficient BAR processes. First, we look at integer-valued BAR processes. We establish, via a martingale approach, the almost sure convergence of the weighted least squares estimators of interest, together with a rate of convergence, a quadratic strong law and their asymptotic normality. Secondly, we study random-coefficient BAR processes. This study extends the notion of bifurcating autoregressive processes by enlarging the randomness of the evolution. We establish the same asymptotic results as in the first study. Finally, we conclude the thesis with another approach to random-coefficient BAR processes, in which the least squares estimators are no longer weighted, making use of the Rademacher-Menchov theorem.
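The binary tree structure of a first-order BAR process, and least squares estimation over mother-daughter pairs, can be sketched as follows (plain rather than weighted least squares, with assumed parameter values, purely to illustrate the tree indexing):

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate a first-order bifurcating autoregressive (BAR) process on a
# binary tree: cell k has daughters 2k and 2k+1.
n_gen = 14
N = 2 ** (n_gen + 1)          # nodes 1 .. N-1
a0, b0 = 0.3, 0.5             # even-daughter parameters (assumed values)
a1, b1 = 0.1, 0.6             # odd-daughter parameters (assumed values)
X = np.zeros(N)
X[1] = rng.normal()
for k in range(1, N // 2):
    X[2 * k] = a0 + b0 * X[k] + rng.normal(0, 0.5)
    X[2 * k + 1] = a1 + b1 * X[k] + rng.normal(0, 0.5)

# Least squares over the (mother, even-daughter) pairs.
mothers = np.arange(1, N // 2)
Xm = X[mothers]
Ye = X[2 * mothers]
A = np.column_stack([np.ones_like(Xm), Xm])
a_hat, b_hat = np.linalg.lstsq(A, Ye, rcond=None)[0]
print(a_hat, b_hat)           # should be close to (0.3, 0.5)
```

The thesis works with weighted versions of these estimators and proves their asymptotics; the sketch only shows the data structure they operate on.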
24

Estimação de estado: a interpretação geométrica aplicada ao processamento de erros grosseiros em medidas / State estimation: the geometrical interpretation applied to the processing of gross errors in measurements

Breno Elias Bretas de Carvalho 22 March 2013 (has links)
This work was proposed with the objective of implementing a computer program to estimate the states (complex nodal voltages) of an electrical power system (EPS) and to apply alternative methods for processing gross errors (GEs), based on the geometrical interpretation of measurement errors and on the measurement innovation concept. Through the geometrical interpretation, BRETAS et al. (2009), BRETAS; PIERETI (2010), BRETAS; BRETAS; PIERETI (2011) and BRETAS et al. (2013) proved mathematically that the measurement error is composed of detectable and undetectable components, and also showed that the detectable component of the error is exactly the residual of the measurement. The methods hitherto used for processing GEs consider only the detectable component of the error and, as a consequence, may fail. In an attempt to overcome this limitation, and based on the works cited previously, two alternative methodologies for processing measurements with GEs were studied and implemented. The first is based on the direct analysis of the components of the measurement errors; the second, similarly to the traditional methods, is based on the analysis of the measurement residuals. However, the differential of the second proposed methodology lies in the fact that it does not consider a fixed threshold value for detecting measurements with GEs. Instead, a new threshold value (TV), characteristic of each measurement, is adopted, as presented in the work of PIERETI (2011). Furthermore, in order to reinforce this theory, an alternative way to calculate these thresholds is proposed, by analyzing the geometry of the probability density function of the multivariate normal distribution of the measurement residuals.
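The residual-based idea can be illustrated with the classical largest-normalized-residual test on a toy linear measurement model (a hedged sketch, not the geometrical methodology of the thesis; the Jacobian H, the noise level and the injected error are all made up):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy linear state estimation: z = H x + e, with one gross error injected.
H = rng.normal(size=(12, 4))            # measurement Jacobian (hypothetical)
x_true = rng.normal(size=4)
sigma = 0.01
z = H @ x_true + rng.normal(0, sigma, 12)
z[5] += 0.5                             # gross error in measurement 5

R_inv = np.eye(12) / sigma**2           # weight matrix W = R^{-1}
G = H.T @ R_inv @ H                     # gain matrix
x_hat = np.linalg.solve(G, H.T @ R_inv @ z)

# Residual sensitivity: r = S e with S = I - H G^{-1} H' R^{-1}
K = H @ np.linalg.solve(G, H.T @ R_inv)
S = np.eye(12) - K                      # residual sensitivity matrix
r = z - H @ x_hat
omega = S * sigma**2                    # residual covariance (R = sigma^2 I here)
r_norm = np.abs(r) / np.sqrt(np.diag(omega))
print("flagged measurement:", int(np.argmax(r_norm)))
```

For a single gross error the largest normalized residual occurs at the erroneous measurement, which is the detectable component the abstract refers to; the undetectable component K e never appears in r.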
26

Composite Likelihood Estimation for Latent Variable Models with Ordinal and Continuous, or Ranking Variables

Katsikatsou, Myrsini January 2013 (has links)
The estimation of latent variable models with ordinal and continuous, or ranking, variables is the research focus of this thesis. The existing estimation methods are discussed and a composite likelihood approach is developed. The main advantages of the new method are its low computational complexity, which remains unchanged regardless of the model size, and that it yields an asymptotically unbiased, consistent, and normally distributed estimator. The thesis consists of four papers. The first investigates the two main formulations of the unrestricted Thurstonian model for ranking data, along with the corresponding identification constraints. It is found that the extra identification constraints required in one of them lead to unreliable estimates unless the constraints coincide with the true values of the fixed parameters. In the second paper, pairwise likelihood (PL) estimation is developed for factor analysis models with ordinal variables. The performance of PL is studied in terms of bias and mean squared error (MSE) and compared with that of the conventional estimation methods via a simulation study and some real data examples. It is found that the PL estimates and standard errors have very small bias and MSE, both decreasing with the sample size, and that the method is competitive with the conventional ones. The results of the first two papers lead to the third, where PL estimation is adapted to the unrestricted Thurstonian ranking model. As before, the performance of the proposed approach is studied through a simulation study with respect to relative bias and relative MSE, in comparison with the conventional estimation methods. The conclusions are similar to those of the second paper. The last paper extends PL estimation to the whole structural equation modeling framework, where data may include both ordinal and continuous variables as well as covariates. The approach is demonstrated through an example run in the R software.
The code used has been incorporated in the R package lavaan (version 0.5-11).
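The pairwise (composite) likelihood idea, maximizing a sum of bivariate log-likelihoods instead of the full joint likelihood, can be sketched for a simple equicorrelated Gaussian model (a Python illustration with assumed values, not the lavaan implementation or the ordinal-variable case):

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import multivariate_normal

rng = np.random.default_rng(4)

# Equicorrelated 4-variate normal; estimate the common correlation rho
# by maximizing a pairwise likelihood over all 6 variable pairs.
p, rho_true = 4, 0.4
Sigma = np.full((p, p), rho_true) + (1 - rho_true) * np.eye(p)
X = rng.multivariate_normal(np.zeros(p), Sigma, size=2000)

pairs = [(i, j) for i in range(p) for j in range(i + 1, p)]

def neg_pairwise_loglik(rho):
    # Each pair contributes a bivariate normal log-density; the full
    # p-dimensional density is never evaluated.
    C = np.array([[1.0, rho], [rho, 1.0]])
    ll = 0.0
    for i, j in pairs:
        ll += multivariate_normal.logpdf(X[:, [i, j]], cov=C).sum()
    return -ll

res = minimize_scalar(neg_pairwise_loglik, bounds=(-0.5, 0.95), method="bounded")
print("pairwise-likelihood estimate of rho:", res.x)
```

The computational point of the abstract shows up here: the cost grows with the number of pairs, not with the dimension of the joint integral, which is what makes the approach attractive for large ordinal models.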
27

Weighted Least Squares Kinetic Upwind Method Using Eigendirections (WLSKUM-ED)

Arora, Konark 11 1900 (has links)
Least Squares Kinetic Upwind Method (LSKUM), a grid-free method based on kinetic schemes, has been gaining popularity over the conventional CFD methods for the computation of inviscid and viscous compressible flows past complex configurations. The main reason for the growing popularity of this method is its ability to work on any point distribution. Grid-free methods do not require a grid for flow simulation, which is an essential requirement for all other conventional CFD methods; they do, however, require a point distribution, or cloud of points. Point generation is relatively simple and less time-consuming than grid generation. There are various methods for point generation, such as an advancing front method, a quadtree-based point generation method, a structured grid generator, an unstructured grid generator, or a combination of the above. One of the easiest ways of generating points around complex geometries is to overlap the simple point distributions generated around the individual constituent parts of the complex geometry. The least squares grid-free method has been successfully used to solve a large number of flow problems over the years. However, it has been observed that some problems are still encountered while using this method on point distributions around complex configurations. Close analysis of these problems has revealed that bad connectivity of the nodes is the cause, and this leads to bad-connectivity-related code divergence. The least squares (LS) grid-free method called LSKUM involves discretization of the spatial derivatives using the least squares approach. The formulae for the spatial derivatives are obtained by minimizing the sum of the squares of the error, leading to a system of linear algebraic equations whose solution gives the formulae for the spatial derivatives.
The least squares matrix A for the 1-D and 2-D cases respectively is given by (Refer PDF File for equation). The 1-D LS formula for the spatial derivatives is always well behaved, in the sense that ∑∆xi² can never become zero. In the 2-D case, however, problems can arise. It is observed that the elements of the LS matrix A are functions of the coordinate differentials of the nodes in the connectivity. Bad connectivity of a node can thus have an adverse effect on the nature of the LS matrices. There are various types of bad connectivity for a node: an insufficient number of nodes in the connectivity, a highly anisotropic distribution of nodes in the connectivity stencil, the nodes falling nearly on a line (or a plane in 3-D), etc. In multidimensions, the case of all nodes on a line will make the matrix A singular, thereby making its inversion impossible. Likewise, an anisotropic distribution of nodes in the connectivity can make the matrix A highly ill-conditioned, leading to either loss of accuracy or code divergence. To overcome this problem, the approach followed so far has been to modify the connectivity by including more neighbours in the connectivity of the node. In this thesis, we have followed a different approach of using weights to alter the nature of the LS matrix A. The weighted LS formulae for the spatial derivatives in 1-D and 2-D respectively are given by (Refer PDF File for equation), where the weights are all positive. So we ask a question: can we reduce the multidimensional LS formula for the derivatives to the 1-D type formula and make use of the advantages of the 1-D type formula in multidimensions? Taking a closer look at the LS matrices, we observe that these are real and symmetric matrices with real eigenvalues and a real and distinct set of eigenvectors. The eigenvectors of these matrices are orthogonal. Along the eigendirections, the corresponding LS formulae reduce to 1-D type formulae. But a problem now arises in combining the eigendirections with upwinding.
Upwinding, which in LS is done by stencil splitting, is essential to provide stability to the numerical scheme. It involves choosing a direction for enforcing upwinding, and the stencil is split along the chosen direction. But it is not necessary that the chosen direction is along one of the eigendirections of the split stencil; thus, in general, we will not be able to use the 1-D type formulae along the chosen direction. This difficulty has been overcome by the use of weights, leading to WLSKUM-ED (Weighted Least Squares Kinetic Upwind Method using Eigendirections). In WLSKUM-ED, weights are suitably chosen so that a chosen direction becomes an eigendirection of A(w). As a result, the multi-dimensional LS formulae reduce to 1-D type formulae along the eigendirections. All the advantages of the 1-D LS formulae can thus be made use of even in multi-dimensions. A very simple and novel way to calculate the positive weights, utilizing the coordinate differentials of the neighbouring nodes in the connectivity in 2-D and 3-D, has been developed for the purpose. This method is based on the fact that the summations of the coordinate differentials are of different signs (+ or -) in different quadrants or octants of the split stencil. It is shown that the choice of suitable weights is equivalent to a suitable decomposition of the vector space: the weights chosen either fully diagonalize the least squares matrix, i.e. decompose the 3-D vector space R3 as R3 = e1 + e2 + e3, where e1, e2 and e3 are the eigenvectors of A(w), or the weights make the chosen direction an eigendirection, i.e. decompose the 3-D vector space R3 as R3 = e1 + (2-D vector space R2). The positive weights not only prevent the denominator of the 1-D type LS formulae from going to zero, but also preserve the LED property of the least squares method.
WLSKUM-ED has been successfully applied to a large number of 2-D and 3-D test cases in various flow regimes, for a variety of point distributions ranging from a simple cloud generated from a structured grid generator (the shock reflection problem in 2-D and the supersonic flow past a hemisphere in 3-D) to multiple chimera clouds generated from multiple overlapping meshes (the BI-NACA test case in 2-D and the FAME cloud for the M165 configuration in 3-D), thus demonstrating the robustness of the WLSKUM-ED solver. It must be noted that the second order accurate computations using this method have been performed without the use of limiters in all the flow regimes. No spurious oscillations or wiggles in the captured shocks have been observed, indicating the preservation of the LED property of the method even for 2nd order accurate computations. Convergence acceleration of the WLSKUM-ED code has been achieved by the use of the LU-SGS method. The use of 1-D type formulae has simplified the application of the LU-SGS method in the grid-free framework. The advantage of the LU-SGS method is that the evaluation and storage of the Jacobian matrices can be eliminated by approximating the split flux Jacobians in the implicit operator itself. Numerical results reveal the attainment of a speed-up of four by using the LU-SGS method as compared to the explicit time marching method. The 2-D WLSKUM-ED code has also been used to perform internal flow computations. Internal flows are flows confined within boundaries, and the inflow and outflow boundaries have a significant effect on them. Accurate treatment of these boundary conditions is essential, particularly if the flow condition at the outflow boundary is subsonic or transonic. The Kinetic Periodic Boundary Condition (KPBC), which has been developed to enable single-passage (SP) flow computations to be performed in place of multi-passage (MP) flow computations, utilizes the moment method strategy.
The state update formula for the points at the periodic boundaries is identical to the state update formula for the interior points and can be easily extended to second order accuracy like the interior points. Numerical results have shown the successful reproduction of the MP flow computation results by SP flow computations using the KPBC. The inflow and outflow boundary conditions at the respective boundaries have been enforced by the use of the Kinetic Outer Boundary Condition (KOBC). These boundary conditions have been validated by performing flow computations for the 3rd test case of the 4th standard turbine blade configuration. The numerical results compare well with the experimental results.
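The 2-D LS derivative formula, and the real symmetric matrix A whose eigendirections the method exploits, can be sketched numerically (a minimal illustration with unit weights and an analytic test function; the kinetic upwinding and the weight-selection procedure of the thesis are not shown):

```python
import numpy as np

rng = np.random.default_rng(5)

# Connectivity stencil of 12 neighbours around (x0, y0) on f(x, y) = sin x cos y.
x0, y0 = 0.3, 0.2
dx = rng.uniform(-0.05, 0.05, 12)
dy = rng.uniform(-0.05, 0.05, 12)
df = np.sin(x0 + dx) * np.cos(y0 + dy) - np.sin(x0) * np.cos(y0)

w = np.ones_like(dx)                  # unit weights: the plain LS formula
# Least squares matrix A(w) and right-hand side of the 2-D derivative formula.
A = np.array([[np.sum(w * dx * dx), np.sum(w * dx * dy)],
              [np.sum(w * dx * dy), np.sum(w * dy * dy)]])
b = np.array([np.sum(w * dx * df), np.sum(w * dy * df)])
grad = np.linalg.solve(A, b)          # estimated (df/dx, df/dy)

exact = np.array([np.cos(x0) * np.cos(y0), -np.sin(x0) * np.sin(y0)])
print("LS gradient:", grad, "exact:", exact)

# A is real and symmetric, so its eigenvectors are orthogonal; along each
# eigendirection the 2-D formula decouples into a 1-D type formula, which is
# the property WLSKUM-ED engineers via the choice of weights.
eigvals, eigvecs = np.linalg.eigh(A)
print("eigenvalues of A(w):", eigvals)
```

A nearly collinear stencil would drive the smallest eigenvalue toward zero, which is the ill-conditioning the abstract attributes to bad connectivity.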
28

電路設計中電流值之罕見事件的統計估計探討 / A study of statistical method on estimating rare event in IC Current

彭亞凌, Peng, Ya Ling Unknown Date (has links)
To obtain the tail distribution of current beyond 4 to 6 sigma is nowadays a key issue in integrated circuit (IC) design, and computer simulation is a popular tool to estimate the tail values. Since creating rare events via simulation is time-consuming, linear extrapolation methods (such as regression analysis) are often applied to enhance efficiency. However, past work has shown that the tail values are likely to behave differently as the operating voltage gets lower. In this study, a statistical method is introduced to deal with the low-voltage case. The data are evaluated via the Box-Cox (power) transformation to see whether they need to be transformed into normally distributed data, followed by weighted regression to extrapolate the tail values. Specifically, the independent variable is the empirical CDF with a logarithm or z-score transformation, and the weighting is of the down-weight type, so as to emphasize the information in the extreme-value observations. In addition to regression analysis, Extreme Value Theory (EVT) is also adopted in the research. Computer simulation and data sets from a famous IC manufacturer in Hsinchu are used to evaluate the proposed method, with respect to mean squared error. In the computer simulation, the data are assumed to be generated from a normal, Student's t, or Gamma distribution. For the empirical data, there are 10^8 observations, and tail values with probabilities 10^(-4), 10^(-5), 10^(-6), and 10^(-7) are set as the study goal given that only 10^5 observations are available. Compared to the traditional methods and EVT, the proposed method has the best performance in estimating the tail probabilities. If the IC current is produced from a regression equation and the information on the independent variables can be provided, weighted regression achieves the best estimation for the left-tailed rare events. EVT can also produce accurate estimates provided that the tail probabilities to be estimated and the observations available are on a similar scale, e.g., probabilities 10^(-5) to 10^(-7) vs. 10^5 observations.
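A standard peaks-over-threshold EVT extrapolation of a rare quantile, of the kind used alongside the weighted regression, can be sketched as follows (simulated standard normal "currents"; the 1% threshold mirrors the study, everything else is an assumption):

```python
import numpy as np
from scipy.stats import genpareto, norm

rng = np.random.default_rng(6)

# Peaks-over-threshold: fit a generalized Pareto distribution (GPD) to the
# top 1% of 10^5 simulated currents, then extrapolate a 10^-6 tail quantile.
n = 100_000
x = rng.normal(size=n)

u = np.quantile(x, 0.99)               # 1% threshold, as in the study
exceed = x[x > u] - u
xi, loc, beta = genpareto.fit(exceed, floc=0.0)

p = 1e-6                               # target tail probability
zeta_u = exceed.size / n               # empirical P(X > u)
q_evt = u + genpareto.ppf(1 - p / zeta_u, xi, loc=0.0, scale=beta)

q_true = norm.ppf(1 - p)               # ~4.75 for a standard normal
print("EVT estimate:", q_evt, "true quantile:", q_true)
```

This illustrates the scale caveat in the abstract: extrapolating from 10^5 observations to a 10^-6 event stretches the fitted GPD well beyond the data, so the estimate is usable but noticeably less stable than at 10^-4 or 10^-5.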
29

Modelling water droplet movement on a leaf surface

Oqielat, Moa'ath Nasser January 2009 (has links)
The central aim of the research undertaken in this PhD thesis is the development of a model for simulating water droplet movement on a leaf surface and the comparison of the model behaviour with experimental observations. A series of five papers is presented to explain systematically the way in which this droplet modelling work has been realised. Knowing the path of the droplet on the leaf surface is important for understanding how a droplet of water, pesticide, or nutrient will be absorbed through the leaf surface. An important aspect of the research is the generation of a leaf surface representation that acts as the foundation of the droplet model. Initially, a laser scanner is used to capture the surface characteristics for two types of leaves in the form of a large scattered data set. After the identification of the leaf surface boundary, a set of internal points is chosen over which a triangulation of the surface is constructed. We present a novel hybrid approach for leaf surface fitting on this triangulation that combines Clough-Tocher (CT) and radial basis function (RBF) methods to achieve a surface with a continuously turning normal. The accuracy of the hybrid technique is assessed by numerical experimentation. The hybrid CT-RBF method is shown to give good representations of Frangipani and Anthurium leaves. Such leaf models facilitate an understanding of plant development and permit the modelling of the interaction of plants with their environment. The motion of a droplet traversing this virtual leaf surface is affected by various forces, including gravity, friction and resistance between the surface and the droplet. The innovation of our model is the use of thin-film theory in the context of droplet movement to determine the thickness of the droplet as it moves on the surface. Experimental verification shows that the droplet model captures reality quite well and produces realistic droplet motion on the leaf surface.
Most importantly, we observed that the simulated droplet motion follows the contours of the surface and spreads as a thin film. In the future, the model may be applied to determine the path of a droplet of pesticide along a leaf surface before it falls from or comes to a standstill on the surface. It will also be used to study the paths of many droplets of water or pesticide moving and colliding on the surface.
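A steepest-descent caricature of droplet motion on an analytic surface illustrates how a path can follow surface contours and come to a standstill (a gravity-only toy model; the surface and step rule are invented, and the thin-film dynamics and friction terms of the actual model are omitted):

```python
import numpy as np

# Hypothetical smooth "leaf" height function (not a scanned surface).
def f(x, y):
    return 0.5 * x**2 + 0.1 * np.sin(3.0 * y) + 0.3 * y**2

def grad_f(x, y, h=1e-6):
    # Central-difference gradient of the surface height.
    gx = (f(x + h, y) - f(x - h, y)) / (2 * h)
    gy = (f(x, y + h) - f(x, y - h)) / (2 * h)
    return np.array([gx, gy])

pos = np.array([1.0, 1.0])            # droplet release point
path = [pos.copy()]
step = 0.01
for _ in range(2000):
    g = grad_f(*pos)
    if np.linalg.norm(g) < 1e-4:      # droplet comes to a standstill
        break
    pos = pos - step * g              # move downhill along -grad f
    path.append(pos.copy())

print("steps taken:", len(path), "final height:", f(*pos))
```

The recorded `path` is the discrete analogue of the droplet trajectory; a full model would also evolve the droplet thickness along this path, as the thesis does with thin-film theory.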
30

County level suicide rates and social integration: urbanicity and its role in the relationship

Walker, Jacob Travis 05 May 2007 (has links)
This study adds to the existing research concerning ecological relationships between suicide rates, social integration, and urbanicity in the U.S. Age-sex-race adjusted five-year averaged suicide rates for 1993-1997 and various measures of urbanicity are used. Some proposed relationships held true, while others indicate that social integration and urbanicity are so intertwined in their effects on suicide that no clear, unidirectional pattern emerges. The religious affiliation measure captured unique variations in the role religion plays in this relationship, depending on how urbanicity was measured. The findings suggest that closer attention needs to be paid to how both urbanicity and religious affiliation are measured. Overall, vast regional variation exists in suicide rates, and the role of urbanization can be misunderstood if not properly specified.
