241

A causalidade jurídica na apuração das consequências danosas na responsabilidade civil extracontratual / Legal causation in determining the harmful consequences in extra-contractual civil liability

Magadan, Gabriel de Freitas Melro January 2016 (has links)
This work examines the use and application of legal causation in determining, extending, and delimiting the consequences of damage arising from fault-based extra-contractual civil liability. The thesis proposes the individualization of elements that favor a new reading of and approach to causation, with a view to defining a circumscribed zone of damage reparation and creating a theoretical reference model for practical application in selecting compensable damages. It is divided into two parts. The first deals with the origin, notion, and conceptual development of causation, up to the emergence of a distinctly legal causation, and with the application of causation theories to the resolution of civil liability problems, including the individualization of the constituent elements of causation for verification and application in the assessment of damages. The second addresses the use of legal causation in the selection of compensable damages, its place in the Brazilian Civil Code's rules on compensation, and the formulation of an imputation regime; it also covers hard cases, ricochet damages and loss-of-chance damages, and the proof of causation, including cases of presumption.
242

A (des)naturalização da pessoa jurídica: subjetividade, titularidade e atividade / The (de)naturalization of legal entity: subjectivity, legal capacity and activity

Sergio Marcos Carvalho de Ávila Negri 03 May 2011 (has links)
Starting from a review of the concept of incorporation, this work investigates how the naturalization of the legal entity develops and the losses this process may entail for the protection of the human being within social organizations and for the description of the phenomenon of the firm. From the perspective of the philosophy of language, the thesis reviews the literature on the use of the term "legal person" in legal discourse, highlighting in particular the deconstruction promoted by so-called nominalism. Criteria are also proposed for identifying naturalization, using a gradation that seeks to separate the different groups of related cases. The thesis is structured in three stages: subjectivity, legal capacity, and activity. By comparing the natural person with the legal entity on each of these planes, it seeks to reveal the asymmetry of reasons that separates the personification of the human being from that found in companies, associations, and foundations. Questioning the methodological individualism embedded in the notion of legal personality leads to a reconstruction of the analytical system of concepts in legal discourse, including a revision of the ideas of imputation, legal relationship, legal capacity, and asset partitioning.
243

An Investigation of the Effects of Taking Remedial Math in College on Degree Attainment and College GPA Using Multiple Imputation and Propensity Score Matching

Clovis, Meghan A 28 March 2018 (has links)
Enrollment in degree-granting postsecondary institutions in the U.S. is increasing, as are the numbers of students entering academically underprepared. Students in remedial mathematics represent the largest percentage of total enrollment in remedial courses, and national statistics indicate that less than half of these students pass all of the remedial math courses in which they enroll. In response to the low pass rates, numerous studies have been conducted into the use of alternative modes of instruction to increase passing rates. Despite myriad studies into course redesign, passing rates have seen no large-scale improvement. Lacking is a thorough investigation into preexisting differences between students who do and do not take remedial math. My study examined the effect of taking remedial math courses in college on degree attainment and college GPA using a subsample of the Educational Longitudinal Study of 2002. This nonexperimental study examined preexisting differences between students who did and did not take remedial math. The study incorporated propensity score matching, a statistical analysis not commonly used in educational research, to create comparison groups of matched students using multiple covariate measures. Missing value analyses and multiple imputation procedures were also incorporated as methods for identifying and handling missing data. Analyses were conducted on both matched and unmatched groups, as well as on 12 multiply imputed data sets. Binary logistic regression analyses showed that preexisting differences between students on academic, nonacademic, and non-cognitive measures significantly predicted remedial math-taking in college. Binary logistic regression analyses also indicated that students who did not take remedial math courses in college were 1.5 times more likely to earn a degree than students who took remedial math. Linear regression analyses showed that taking remedial math had a significant negative effect on mean college GPA. 
Students who did not take remedial math had a higher mean GPA than students who did take remedial math. These results were consistent across unmatched groups, matched groups, and all 12 multiply imputed data sets.
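The propensity-score-matching step described in this abstract can be sketched in a few lines. This is an illustrative sketch, not the author's actual pipeline: the hand-rolled logistic fit and the greedy 1:1 nearest-neighbour rule stand in for whatever estimator and matching algorithm the study used, and all names are hypothetical.

```python
import numpy as np

def propensity_match(X, treated, seed=0):
    """Greedy 1:1 nearest-neighbour matching on an estimated propensity score.

    X       : (n, p) covariate matrix
    treated : (n,) boolean indicator (e.g. took remedial math)
    Returns a list of (treated_index, control_index) pairs.
    """
    rng = np.random.default_rng(seed)
    # Fit a simple logistic model by gradient ascent (a stand-in for
    # statsmodels / scikit-learn in a real analysis).
    Xb = np.column_stack([np.ones(len(X)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(500):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w += 0.1 * Xb.T @ (treated - p) / len(X)
    score = Xb @ w                       # match on the linear (logit) scale
    t_idx = np.flatnonzero(treated)
    c_idx = list(np.flatnonzero(~treated))
    pairs = []
    for i in rng.permutation(t_idx):     # random order avoids systematic bias
        j = min(c_idx, key=lambda c: abs(score[i] - score[c]))
        pairs.append((i, j))
        c_idx.remove(j)                  # match without replacement
    return pairs
```

Matching on the logit of the propensity score, rather than the raw probability, is a common recommendation because its distribution is closer to normal.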
244

Bayesian Cluster Analysis : Some Extensions to Non-standard Situations

Franzén, Jessica January 2008 (has links)
The Bayesian approach to cluster analysis is presented. We assume that all data stem from a finite mixture model, where each component corresponds to one cluster and is given by a multivariate normal distribution with unknown mean and variance. The method produces posterior distributions of all cluster parameters and proportions as well as associated cluster probabilities for all objects. We extend this method in several directions to some common but non-standard situations. The first extension covers the case with a few deviant observations not belonging to one of the normal clusters. An extra component/cluster is created for them, which has a larger variance or a different distribution, e.g. is uniform over the whole range. The second extension is clustering of longitudinal data. All units are clustered at all time points separately and the movements between time points are modeled by Markov transition matrices. This means that the clustering at one time point will be affected by what happens at the neighbouring time points. The third extension handles datasets with missing data, e.g. item non-response. We impute the missing values iteratively in an extra step of the Gibbs sampler estimation algorithm. The Bayesian inference of mixture models has many advantages over the classical approach. However, it is not without computational difficulties. A software package, written in Matlab, for Bayesian inference of mixture models is introduced. The programs of the package handle the basic cases of clustering data that are assumed to arise from mixture models of multivariate normal distributions, as well as the non-standard situations.
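The allocation step of such a Gibbs sampler can be illustrated for a simple univariate case. A minimal sketch assuming known component parameters (the full sampler in the thesis also draws the means, variances, and mixture proportions from their posteriors, and handles the multivariate case); names are illustrative:

```python
import numpy as np

def cluster_probs(y, means, sds, weights):
    """Posterior cluster-membership probabilities for one Gibbs allocation step
    of a univariate normal mixture, given the current component parameters."""
    y = np.asarray(y, dtype=float)[:, None]          # shape (n, 1)
    # Weighted normal densities, one column per mixture component.
    dens = weights * np.exp(-0.5 * ((y - means) / sds) ** 2) / (sds * np.sqrt(2 * np.pi))
    return dens / dens.sum(axis=1, keepdims=True)    # normalise each row
```

In the sampler, each object's cluster label is then drawn from a categorical distribution with these row probabilities; a missing value would be imputed in the extra step by drawing from its cluster's current normal distribution.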
245

Peer influence on smoking : causation or correlation?

Langenskiöld, Sophie January 2005 (has links)
In this thesis, we explore two different approaches to causal inferences. The traditional approach models the theoretical relationship between the outcome variables and their explanatory variables, i.e., the science, at the same time as the systematic differences between treated and control subjects are modeled, i.e., the assignment mechanism. The alternative approach, based on Rubin's Causal Model (RCM), makes it possible to model the science and the assignment mechanism separately in a two-step procedure. In the first step, no outcome variables are used when the assignment mechanism is modeled, the treated students are matched with similar control students using this mechanism, and the models for the science are determined. Outcome variables are only used in the second step when these pre-specified models for the science are fitted. In the first paper, we use the traditional approach to evaluate whether a husband is more prone to quit smoking when his wife quits smoking than he would have been had his wife not quit. We find evidence that this is the case, but that our analysis must rely on restrictive assumptions. In the subsequent two papers, we use the alternative RCM approach to evaluate if a Harvard freshman who does not smoke (observed potential outcome) is more prone to start smoking when he shares a suite with at least one smoker, than he would have been had he shared a suite with only smokers (missing potential outcomes). We do not find evidence that this is the case, and the small and insignificant treatment effect is robust against various assumptions that we make regarding covariate adjustments and missing potential outcomes. In contrast, we do find such evidence when we use the traditional approach previously used in the literature to evaluate peer effects relating to smoking, but the treatment effect is not robust against the assumptions that we make regarding covariate adjustments. 
These contrasting results in the two latter papers allow us to conclude that there are a number of advantages with the alternative RCM approach over the traditional approaches previously used to evaluate peer effects relating to smoking. Because the RCM does not use the outcome variables when the assignment mechanism is modeled, it can be re-fit repeatedly without biasing the models for the science. The assignment mechanism can then often be modeled to fit the data better and, because the models for the science can consequently better control for the assignment mechanism, they can be fit with less restrictive assumptions. Moreover, because the RCM models two distinct processes separately, the implications of the assumptions that are made on these processes become more transparent. Finally, the RCM can derive the two potential outcomes needed for drawing causal inferences explicitly, which enhances the transparency of the assumptions made with regard to the missing potential outcomes. / Diss. Stockholm : Handelshögskolan, 2006 S. 1-13: sammanfattning, s. [15]-161: 4 uppsatser
246

Comparison Of Missing Value Imputation Methods For Meteorological Time Series Data

Aslan, Sipan 01 September 2010 (has links) (PDF)
Dealing with missing data in spatio-temporal time series is an important branch of the general missing-data problem. Because the statistical properties of time-dependent data are characterized by the sequentiality of observations, any interruption of consecutiveness in a time series causes severe problems. To make reliable analyses in this case, missing data must be handled cautiously, without disturbing the series' statistical properties, chiefly its temporal and spatial dependencies. In this study we compare several imputation methods for the appropriate completion of missing values in spatio-temporal meteorological time series. For this purpose, the methods are assessed on their imputation performance for artificially created missing data in monthly total precipitation and monthly mean temperature series obtained from the climate stations of the Turkish State Meteorological Service. The artificially created missing data are estimated by six methods. Single Arithmetic Average (SAA), Normal Ratio (NR), and NR Weighted with Correlations (NRWC) are the three simple methods used in the study. We also used two computationally intensive methods for missing-data imputation: a Multi-Layer Perceptron neural network (MLPNN) and a Markov Chain Monte Carlo approach based on the Expectation-Maximization algorithm (EM-MCMC). In addition, we propose a modification of the EM-MCMC method in which the results of the simple imputation methods are used as auxiliary variables. Besides an accuracy measure based on squared errors, we propose the Correlation Dimension (CD) technique, an important subject of nonlinear dynamic time-series analysis, for the appropriate evaluation of imputation performance.
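Of the six methods, the two simplest can be stated in a few lines. A sketch under the textbook definitions — SAA averages the simultaneous observations at neighbouring stations, and NR rescales each neighbour by the ratio of the target station's long-term normal to the neighbour's — with illustrative names (the study's exact formulations may differ):

```python
import numpy as np

def saa(neighbors_t):
    """Single Arithmetic Average: mean of the neighbouring stations' values at time t."""
    return float(np.mean(neighbors_t))

def normal_ratio(neighbors_t, neighbor_normals, target_normal):
    """Normal Ratio: average of neighbour values, each rescaled by the ratio of the
    target station's long-term normal to that neighbour's long-term normal."""
    neighbors_t = np.asarray(neighbors_t, dtype=float)
    neighbor_normals = np.asarray(neighbor_normals, dtype=float)
    return float(np.mean(target_normal / neighbor_normals * neighbors_t))
```

For example, if both neighbours report 10% of their own long-term normal, NR estimates the target at 10% of its normal, whereas SAA would simply average the raw neighbour values.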
247

The role of families in the stratification of attainment : parental occupations, parental education and family structure in the 1990s

Playford, C. J. January 2011 (has links)
The closing decades of the 20th century have witnessed a large increase in the numbers of young people remaining in education post-16 rather than entering the labour market. Concurrently, overall educational attainment in General Certificate of Secondary Education (GCSE) qualifications in England and Wales has steadily increased since their introduction in 1988. The 1990s represent a key period of change in these trends. Some sociologists argue that processes of detraditionalisation have occurred whereby previous indicators of social inequality, such as social class, are less relevant to the transitions of young people from school to work. Sociologists from other traditions argue that inequalities persist in the stratification of educational attainment by the family backgrounds of young people but that these factors have changed during this period. This thesis is an investigation of the influence of family background factors upon GCSE attainment during the 1990s. This includes extensive statistical analysis of measures of parental occupation, parental education and family structure with gender, ethnicity, school type and housing tenure type within the Youth Cohort Study of England and Wales. These analyses include over 100,000 respondents in 6 cohorts of school leavers with the harmonisation of data from cohort 6 (1992) to the Youth Cohort Time Series for England, Wales and Scotland 1984-2002 (Croxford, Ianelli and Shapira 2007). By adding the 1992 data to existing 1990s cohorts, the statistical models fitted apply to the complete set of 1990s cohorts and are therefore able to provide insight for the whole of this period. Strong differentials by parental occupation persist throughout the 1990s and do not diminish despite the overall context of rising attainment. This relationship remains net of the other factors listed, irrespective of the measure of parental occupation or the GCSE attainment outcome measure used. 
This builds upon and supports previous work conducted using the Youth Cohort Study and suggests that stratification in educational attainment remains a significant factor. Gender and ethnicity remain further sources of persistent stratification in GCSE attainment. Following a discussion of the weighting system and features of the Youth Cohort Study as a dataset, a thorough investigation of missing data is included, with the results of multiply imputed datasets used to examine the potential for missing data to bias estimates. This includes a critique of these approaches in the context of survey data analysis. The findings from this investigation suggest the importance of survey data collection methods, the limitations of post-survey bias correction methods and provide a thorough investigation of the data. The analysis then develops and expands previous work by investigating variation in GCSE attainment by subjects studied, through Latent Class Analysis of YCS cohort 6 (1992). Of the four groups identified in the model, a clear division is noted between those middle-attaining groups with respect to attainment in Science and Mathematics. GCSE attainment in combinations of subjects studied is stratified particularly with respect to gender and ethnicity. This research offers new insight into the role of family background factors in GCSE attainment by subject combination.
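Pooling results across the 12 multiply imputed data sets, as in the analyses described above, conventionally follows Rubin's combining rules; a minimal sketch for a scalar estimate (illustrative, not the thesis's actual code):

```python
import numpy as np

def pool_rubin(estimates, variances):
    """Combine an estimate across m imputed data sets using Rubin's rules.

    estimates : the m point estimates of the same quantity
    variances : the m squared standard errors (within-imputation variances)
    Returns the pooled estimate and its total variance.
    """
    estimates = np.asarray(estimates, dtype=float)
    variances = np.asarray(variances, dtype=float)
    m = len(estimates)
    qbar = estimates.mean()               # pooled point estimate
    ubar = variances.mean()               # average within-imputation variance
    b = estimates.var(ddof=1)             # between-imputation variance
    return qbar, ubar + (1 + 1 / m) * b   # total variance, inflated for imputation noise
```

The between-imputation term is what distinguishes multiple imputation from single imputation: it propagates the uncertainty due to the missing data into the standard errors.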
248

Traitement des données manquantes en épidémiologie : Application de l'imputation multiple à des données de surveillance et d'enquêtes / Handling missing data in epidemiology: application of multiple imputation to surveillance and survey data

Héraud Bousquet, Vanina 06 April 2012 (has links) (PDF)
The treatment of missing data is a rapidly expanding topic in epidemiology. The most commonly used method restricts analyses to subjects with complete data for the variables of interest, which can reduce power and precision and bias the estimates. The aim of this work was to investigate and apply a multiple-imputation method to cross-sectional data from epidemiological surveys and infectious-disease surveillance systems. We present the application of multiple imputation to studies of different designs: an analysis of the risk of HIV transmission through transfusion, a case-control study of risk factors for Campylobacter infection, and a capture-recapture study estimating the number of new HIV diagnoses among children. Using a surveillance database for chronic hepatitis C (HCV), we imputed missing data in order to identify risk factors for severe liver complications among drug users. Using the same data, we proposed criteria for applying a sensitivity analysis to the assumptions underlying multiple imputation. Finally, we describe the development of a durable imputation process applied to data from the HIV surveillance system and its evolution over time, together with the evaluation and validation procedures. The practical applications presented allowed us to develop a strategy for handling missing data, including in-depth examination of the incomplete database, construction of the multiple-imputation model, and the steps of model validation and assumption checking.
249

Analyses bioinformatiques dans le cadre de la génomique du SIDA / Bioinformatic analyses in the genomics of AIDS

Coulonges, Cédric 16 December 2011 (has links) (PDF)
Current technologies make it possible to scan the entire genome for genetic variants associated with disease. This requires bioinformatics tools adapted to the interface of computer science, statistics, and biology. My thesis focused on the bioinformatic exploitation of genomic data from the GRIV AIDS cohort and from the International HIV Acquisition Consortium (IHAC) project. Laying the groundwork for imputation, I first developed the SUBHAP software. Our team showed that the HLA region is essential in non-progression and in the control of viral load, which led me to study the non-"elite" non-progressor phenotype. I thus identified a variant of the CXCR6 gene which, outside HLA, is the only finding identified by a genome-wide approach that has been replicated. Imputation of the IHAC project data (10,000 infected patients and 15,000 controls) has been carried out, and the first associations are being explored.
250

資料採礦中之模型選取 / Model selection in data mining

孫莓婷 Unknown Date (has links)
With the fast pace of advancement in computer technology, computers can store huge amounts of data. Abundant data, without proper treatment, does not necessarily mean having valuable information on hand; a large database system can serve merely as a means of storage and retrieval. With this in mind, we focus on the integrity of the database, adapting methods in which missing values are imputed and added while leaving the data structure unmodified. The aim of this study is to find out which of three modelling approaches — regression analysis, logistic regression analysis, and the C5.0 decision tree — provides, after imputation, the value-added database most consistent with and most closely resembling the original one. The results are as follows. Under the regression model, regression-based imputation produced the database structure closest to the original; and when a smaller data set is desired, systematic sampling gives a better outcome than simple random sampling. Under the C5.0 decision tree, a neural-network algorithm used as the main imputation method both increased the information content and brought the imputed data closer to the original. For the logistic regression model, the class proportions of the discrete variables were too unbalanced for this approach to reach a valid conclusion. The empirical analysis shows that the best-performing value-added technique differs across model-fitting methods, but compared with the unimputed database, value-added database techniques can indeed increase the information content, bringing the augmented database closer to the original data structure.
