  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
281

Evaluation économique des aires marines protégées : apports méthodologiques et applications aux îles Kuriat (Tunisie) / Economic valuation of marine protected areas : methodological perspectives and empirical applications to Kuriat Islands (Tunisia)

Mbarek, Marouene 16 December 2016 (has links)
The protection of marine natural resources is a major challenge for policy makers, and the recent development of marine protected areas (MPAs) contributes to these preservation efforts. MPAs aim to conserve marine and coastal ecosystems while accommodating human activities, and the complexity of these twin objectives makes them difficult to achieve. The purpose of this thesis is to conduct an ex ante analysis of a proposed MPA at the Kuriat Islands (Tunisia). The analysis supports decision makers in improving governance by integrating the actors involved (fishermen, visitors, boaters) into the management process. To do this, we apply the contingent valuation method (CVM) to samples of fishermen and visitors to the Kuriat Islands, paying particular attention to the treatment of selection and sampling bias and to the uncertainty surrounding the specification of the econometric models. We use the HeckitBMA model, a combination of the Heckman (1979) selection model and Bayesian inference, to estimate the fishermen's willingness to accept, and the zero-inflated ordered probit (ZIOP) model, which combines a binary probit with an ordered probit, to estimate the visitors' willingness to pay after correcting the sample by multiple imputation. Our results show that the groups of actors differ in their activities and economic circumstances, which leads them to perceive the project differently. This allows policy makers to design a compensation scheme for the actors who would be harmed.
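The selection correction named in the abstract can be illustrated on simulated data. The sketch below covers only Heckman's (1979) two-step estimator, without the Bayesian averaging layer of HeckitBMA; all variable names and parameter values are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 5000

# Simulated survey: z is an instrument affecting only response,
# x drives the latent willingness to accept (WTA); correlated errors
# make a naive OLS on respondents alone biased.
z = rng.normal(size=n)
x = rng.normal(size=n)
u, e = rng.multivariate_normal([0, 0], [[1.0, 0.6], [0.6, 1.0]], size=n).T
responded = (0.5 + 1.0 * z + 0.8 * x + u) > 0
y = 2.0 + 1.5 * x + e                      # latent WTA, observed if responded

# Step 1: probit of response on (1, z, x), fitted by maximum likelihood.
W = np.column_stack([np.ones(n), z, x])
def nll(g):
    p = np.clip(norm.cdf(W @ g), 1e-10, 1 - 1e-10)
    return -np.where(responded, np.log(p), np.log(1 - p)).sum()
g_hat = minimize(nll, np.zeros(3)).x

# Step 2: OLS on respondents, augmented with the inverse Mills ratio,
# which absorbs the selection effect.
s = responded
imr = norm.pdf(W[s] @ g_hat) / norm.cdf(W[s] @ g_hat)
X = np.column_stack([np.ones(s.sum()), x[s], imr])
beta, *_ = np.linalg.lstsq(X, y[s], rcond=None)
print(beta[:2])   # intercept and slope, which should be near (2.0, 1.5)
```

A significant coefficient on the inverse Mills ratio is itself evidence that selection is non-ignorable, which is the situation the thesis addresses.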
282

Analysis of survey data in the presence of non-ignorable missing-data and selection mechanisms

Hammon, Angelina 04 July 2023 (has links)
This thesis deals with methods for the appropriate handling of non-ignorable missing data and sample selection, two common challenges in survey data analysis. Both issues can severely degrade the quality of analysis results and lead to misleading inferences about the population. In three research articles, I therefore treat methods for carrying out so-called sensitivity analyses with respect to the missing-data and selection mechanisms that are usable with typical survey data. In the first and second articles, I develop novel procedures for the multiple imputation of binary and ordinal multilevel data that are suspected to be Missing Not At Random (MNAR). Various simulation studies covering different data scenarios confirmed that the new imputation methods produce unbiased and efficient estimates, and I demonstrate their applicability to empirical data. In the third article, I investigate a measure to quantify and adjust for non-ignorable selection bias in proportions estimated from non-probability data. This is the first application of the suggested index to a real non-probability sample outside its original research group. In addition, I derive general guidelines for its use in practice and validate the measure's ability to properly detect selection bias. The three articles highlight the need to assess the sensitivity of estimates to different assumptions about the missing-data and selection mechanisms whenever it seems plausible that the ignorability assumption is violated, and they provide first solutions for such robustness checks in specific data situations.
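The sensitivity analyses described above rely on bespoke multilevel models; a minimal illustration of the general idea is the delta-adjustment (pattern-mixture) approach: impute under a MAR model, then shift the imputed values by a range of offsets δ and watch how the estimate of interest moves. All numbers below are simulated for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000

# Complete data, then impose missingness on y.
x = rng.normal(size=n)
y = 1.0 + 0.5 * x + rng.normal(scale=0.8, size=n)
miss = rng.random(n) < 0.3
y_obs = np.where(miss, np.nan, y)

# MAR imputation model: regression of y on x fitted to observed cases.
obs = ~np.isnan(y_obs)
X = np.column_stack([np.ones(obs.sum()), x[obs]])
b, *_ = np.linalg.lstsq(X, y_obs[obs], rcond=None)
y_hat = b[0] + b[1] * x

# Delta adjustment: shift imputations by delta (delta != 0 encodes an
# MNAR scenario) and track the resulting estimate of the mean.
results = {}
for delta in (-0.5, 0.0, 0.5):
    y_imp = np.where(obs, y_obs, y_hat + delta)
    results[delta] = y_imp.mean()
    print(f"delta={delta:+.1f}  mean={results[delta]:.3f}")
```

If the substantive conclusion survives plausible values of δ, it is robust to departures from ignorability; if not, the MNAR machinery of the thesis becomes essential.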
283

研發扣抵與兩稅合一之政策效果 ‒ 以台灣與 OECD 國家比較 / The policy effect of research & development tax credit and dividend imputation credit – International comparison between Taiwan and OECD countries

林奕成, Lin, Yih Cheng Unknown Date (has links)
The effectiveness of R&D tax credits is inconsistent across the past literature, and many researchers believe one possible reason is the interaction with dividend imputation credits. Dividend imputation increases a company's incentive to pay dividends, while an R&D tax credit increases its R&D investment; with limited funds, each policy can weaken the effect of the other. In recent years Taiwan has moved from the Statute for the Encouragement of Investment through the Statute for Upgrading Industry to the current Statute for Industrial Innovation, and the effect of these measures on investment has been controversial. Since Taiwan operates both an R&D tax credit and dividend imputation (introduced to relieve double taxation), whether the two policies conflict is a question worth examining. This paper analyzes unbalanced panel data on listed companies in Taiwan and the OECD from 1996 to 2014. The empirical results indicate that in countries implementing both credits, the negative correlation between dividend payments and R&D investment is more pronounced than in the rest of the sample, suggesting that the conflict between the two outlays is sharper under a dual-credit regime. Comparing Taiwan with the other countries, however, the correlation between the two payments is significantly positive despite the dual-credit regime, possibly because Taiwan's R&D tax credit offers a markedly larger incentive than its dividend imputation. A difference-in-differences sensitivity analysis, however, shows no significant change in the relationship between R&D investment and dividend payments after the introduction of dividend imputation in 1998 or of the Statute for Industrial Innovation in 2010.
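The difference-in-differences design mentioned above reduces to an interaction regression: the coefficient on treated × post is the policy effect. The sketch below uses simulated firm data, not the Taiwan/OECD panel.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4000

treated = rng.integers(0, 2, n)          # e.g. firms subject to both credits
post = rng.integers(0, 2, n)             # e.g. after the 1998 or 2010 reform
effect = 0.4                             # true DiD effect (invented)
y = (1.0 + 0.3 * treated + 0.2 * post
     + effect * treated * post + rng.normal(scale=0.5, size=n))

# OLS with the interaction term; beta[3] is the DiD estimate.
X = np.column_stack([np.ones(n), treated, post, treated * post])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(round(float(beta[3]), 2))
```

An insignificant interaction coefficient, as the paper reports for both reforms, means the dividend–R&D relationship did not shift measurably at the policy dates.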
284

稅額扣抵比率及股權集中度對除權(息)股價之影響 / The effects of the imputation tax credit ratio and ownership concentration on ex-rights (ex-dividend) stock prices

丁文萍 Unknown Date (has links)
This paper examines the impact of the imputation tax credit ratio and ownership concentration on cumulative abnormal returns (CARs) before and after the ex-dividend day, using the CARs as the dependent variable. The data cover domestic listed companies that distributed earnings from 1999 to 2006; the financial industry is excluded because of its special characteristics. The empirical models are estimated by ordinary least squares. The main results are as follows: 1. The imputation credit ratio has a significantly positive impact on CARs before the ex-dividend day and a negative but less significant impact afterwards, implying that the tax effect of the imputation credit is more pronounced before the ex-dividend day. 2. No statistically significant relation is found between ownership concentration and CARs either before or after the ex-dividend day. Two explanations are possible: investors may also weigh non-tax costs, or ownership concentration may not fully capture the marginal income tax rate of individual investors. 3. CARs before the ex-dividend day are lower for companies with a low imputation credit ratio than for the high-ratio (benchmark) group, as expected. However, regressions with additional dummy variables for the degree of the imputation credit ratio or of ownership concentration find no differential impact on CARs before or after the ex-dividend day. 4. Firm size and the market-to-book ratio have significantly positive impacts on CARs both before and after the ex-dividend day. Dividend yield has a significantly positive impact on CARs before the ex-dividend day and a significantly negative impact afterwards; a dummy variable for the electronics industry shows the opposite pattern. The graphical analysis further suggests that for companies with a high imputation credit ratio or low ownership concentration, CARs fluctuate less around the ex-dividend day and are less affected by the event.
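A cumulative-abnormal-return calculation of the kind used above can be sketched with a market model: fit alpha and beta on an estimation window, then cumulate the residuals over the event window. The return series below is simulated and the window lengths are arbitrary choices, not those of the study.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated daily returns: 120-day estimation window, 40-day event window.
mkt = rng.normal(0.0005, 0.01, 160)
stock = 0.0002 + 1.2 * mkt + rng.normal(0, 0.008, 160)
stock[150:] += 0.004                     # stylized drift late in the event window

# Market model fitted on the estimation window.
est_mkt, est_stock = mkt[:120], stock[:120]
X = np.column_stack([np.ones(120), est_mkt])
(alpha, beta), *_ = np.linalg.lstsq(X, est_stock, rcond=None)

# Abnormal returns and their cumulative sum over the event window.
event_ar = stock[120:] - (alpha + beta * mkt[120:])
car = np.cumsum(event_ar)
print(f"CAR at the end of the event window: {car[-1]:.4f}")
```

In the study's setting, the CARs computed this way around each ex-dividend day become the dependent variable regressed on the imputation credit ratio and ownership concentration.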
285

Statistical HLA type imputation from large and heterogeneous datasets

Dilthey, Alexander Tilo January 2012 (has links)
An individual's Human Leukocyte Antigen (HLA) type is an essential immunogenetic parameter, influencing susceptibility to a variety of autoimmune and infectious diseases, to certain types of cancer and the likelihood of adverse drug reactions. I present and evaluate two models for the accurate statistical determination of HLA types for single-population and multi-population studies, based on SNP genotypes. Importantly, SNP genotypes are already available for many studies, so that the application of the statistical methods presented here does not incur any extra cost besides computing time. HLA*IMP:01 is based on a parallelized and modified version of LDMhc (Leslie et al., 2008), enabling the processing of large reference panels and improving call rates. In a homogeneous single-population imputation scenario on a mainly British dataset, it achieves accuracies (posterior predictive values) and call rates >=88% at all classical HLA loci (HLA-A, HLA-B, HLA-C, HLA-DQA1, HLA-DQB1, HLA-DRB1) at 4-digit HLA type resolution. HLA*IMP:02 is specifically designed to deal with multi-population heterogeneous reference panels and based on a new algorithm to construct haplotype graph models that takes into account haplotype estimate uncertainty, allows for missing data and enables the inclusion of prior knowledge on linkage disequilibrium. It works as well as HLA*IMP:01 on homogeneous panels and substantially outperforms it in more heterogeneous scenarios. In a cross-European validation experiment, even without setting a call threshold, HLA*IMP:02 achieves an average accuracy of 96% at 4-digit resolution (>=91% for all loci, which is achieved at HLA-DRB1). HLA*IMP:02 can accurately predict structural variation (DRB paralogs), can (to an extent) detect errors in the reference panel and is highly tolerant of missing data. 
I demonstrate that a good match between imputation and reference panels in terms of principal components, together with a sufficiently large reference panel, is an essential determinant of high imputation accuracy under HLA*IMP:02.
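The HLA*IMP models are built on haplotype graphs, which are beyond a short sketch; as a much-simplified illustration of the underlying idea of SNP-based HLA imputation, one can compute a posterior over HLA alleles from co-occurrence frequencies in a reference panel. The panel and allele names below are hypothetical.

```python
from collections import Counter

# Hypothetical reference panel: (SNP haplotype, HLA allele) pairs.
reference = [
    ("ACGT", "A*01:01"), ("ACGT", "A*01:01"), ("ACGT", "A*02:01"),
    ("ACGA", "A*02:01"), ("ACGA", "A*02:01"), ("TCGT", "A*03:01"),
]

def impute_hla(snp_hap, panel, prior=1e-6):
    """Posterior over HLA alleles given a SNP haplotype, by counting
    co-occurrences in the reference panel (small-prior smoothing so that
    unseen alleles keep nonzero probability)."""
    counts = Counter(hla for hap, hla in panel if hap == snp_hap)
    alleles = {hla for _, hla in panel}
    total = sum(counts.values()) + prior * len(alleles)
    return {a: (counts[a] + prior) / total for a in alleles}

post = impute_hla("ACGT", reference)
best = max(post, key=post.get)
print(best, round(post[best], 2))   # best-supported allele and its posterior
```

Real imputation must additionally handle phase uncertainty, missing SNPs, and recombination, which is what the haplotype graph models in the thesis provide.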
286

Sélection de modèle d'imputation à partir de modèles bayésiens hiérarchiques linéaires multivariés

Chagra, Djamila 06 1900 (has links)
The software used was S-Plus and R. / The technique known as multiple imputation seems to be the most suitable for solving the problem of non-response. The literature describes methods that model the nature and structure of missing values. One of the most popular is the PAN algorithm of Schafer and Yucel (2002), whose imputations are based on a multivariate linear mixed-effects model for the response variable. The BHLC model of Murua et al. (2005) is a Bayesian hierarchical clustered, more flexible extension of PAN. The main goal of this work is to study model selection for multiple imputation in terms of the efficiency and accuracy of missing-value predictions. We propose a performance measure linked to the prediction of missing values: a mean squared error which, in addition to the variance associated with the multiple imputations, includes the prediction bias. We show that this measure is more objective than Rubin's commonly used variance measure. It is computed by declaring missing a small additional proportion of the observed data; the performance of the imputation model is then assessed through the prediction error on these pseudo-missing values. To study the problem objectively, we devised several simulations in which data were generated from different explicit models with particular assumptions about the error structure, and several prior distributions for the missing values as well as error-term distributions were hypothesized. Our study investigates whether the true error structure of the data affects the performance of the different hypothesized choices for the imputation model; we conclude that it does. Moreover, the choice of the prior distribution for the missing values appears to be the most important factor for prediction accuracy. In general, the most effective choices for good imputations are a Student-t error distribution with different variances across clusters, and a Normal prior for the missing values with data-driven mean and variance, or a regularizing Normal prior with large variance (a ridge-regression-like prior). Finally, we applied our ideas to a real problem concerning health outcomes observed across a large number of countries. Keywords: missing values, multiple imputation, Bayesian hierarchical linear model, mixed-effects model.
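The proposed performance measure, a prediction MSE over artificially hidden pseudo-missing values, can be sketched as follows. The data are simulated and a plain regression imputer stands in for the PAN/BHLC models of the thesis.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1000

x = rng.normal(size=n)
y = 2.0 + 1.0 * x + rng.normal(scale=0.5, size=n)

# Hide a small extra proportion of observed y values ("pseudo-missing")
# and score the imputation model by its prediction error on them.
holdout = rng.choice(n, size=50, replace=False)
train = np.setdiff1d(np.arange(n), holdout)

# Imputation model fitted to the remaining observed data.
X = np.column_stack([np.ones(train.size), x[train]])
b, *_ = np.linalg.lstsq(X, y[train], rcond=None)
sigma = np.std(y[train] - (b[0] + b[1] * x[train]))

# m multiple imputations drawn from the predictive distribution.
m = 20
preds = np.array([b[0] + b[1] * x[holdout]
                  + rng.normal(scale=sigma, size=holdout.size)
                  for _ in range(m)])

# MSE of the pooled imputations: captures bias^2 plus imputation variance.
mse = np.mean((preds.mean(axis=0) - y[holdout]) ** 2)
print(round(float(mse), 3))
```

Competing imputation models (different error distributions, different priors) can then be ranked by this MSE, which is the model-selection criterion the thesis advocates.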
287

Comparação entre métodos de imputação de dados em diferentes intensidades amostrais na série homogênea de precipitação pluvial da ESALQ / Comparison between data imputation methods at different sample intensities in the ESALQ homogeneous rainfall series

Gasparetto, Suelen Cristina 07 June 2019 (has links)
Frequent problems in the statistical analysis of meteorological data are the occurrence of missing values and a lack of knowledge about the homogeneity of the information in the database. The objective of this work was to test and classify the homogeneity of the rainfall series of the conventional climatological station of ESALQ from 1917 to 1997, and to compare three data imputation methods at different sample intensities (5%, 10% and 15%) of randomly generated missing data. Three homogeneity tests were used: Pettitt, Buishand and standard normal. To fill in the missing information, three multiple imputation methods were compared at each sampling intensity: PMM (predictive mean matching), random forest, and linear regression via the bootstrap method, all through the MICE (Multivariate Imputation by Chained Equations) package in R. The imputation procedures were compared by root mean square error, Willmott's index of agreement, and the performance index. The rainfall series was classified as class 1, "useful": no clear sign of inhomogeneity was apparent. The method with the smallest errors and highest indices was PMM, particularly at the 10% intensity of missing information. The performance index of all three imputation methods, at every intensity of missing observations, was nevertheless rated "terrible".
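The two comparison criteria named above, RMSE and Willmott's index of agreement, are straightforward to compute; the rainfall values below are invented for illustration.

```python
import numpy as np

def rmse(pred, obs):
    """Root mean square error between imputed and observed values."""
    return float(np.sqrt(np.mean((pred - obs) ** 2)))

def willmott_d(pred, obs):
    """Willmott's index of agreement: 1 = perfect match, 0 = no agreement."""
    obar = obs.mean()
    num = np.sum((pred - obs) ** 2)
    den = np.sum((np.abs(pred - obar) + np.abs(obs - obar)) ** 2)
    return float(1 - num / den)

obs = np.array([12.0, 0.0, 3.5, 20.1, 0.0, 7.2])    # e.g. daily rainfall (mm)
pred = np.array([10.5, 0.4, 4.0, 18.0, 0.0, 8.0])   # imputed values

print(round(rmse(pred, obs), 2), round(willmott_d(pred, obs), 3))
```

In the study, each imputation method is scored this way on the values it was asked to fill in, with lower RMSE and higher d indicating the better method.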
288

Owner Occupied Housing in the CPI and its Impact on Monetary Policy during Housing Booms and Busts

Hill, Robert J., Steurer, Miriam, Waltl, Sofie R. 07 1900 (has links) (PDF)
The treatment of owner-occupied housing (OOH) is probably the most important unresolved issue in inflation measurement. How, and whether, it is included in the Consumer Price Index (CPI) affects inflation expectations, the measured level of real interest rates, and the behavior of governments, central banks and market participants. We show that none of the existing treatments of OOH are fit for purpose. Hence we propose a new simplified user cost method with better properties. Using a micro-level dataset, we then compare the empirical behavior of eight different treatments of OOH. Our preferred user cost approach pushes up the CPI during housing booms (by 2 percentage points or more). Our findings relate to the following important debates in macroeconomics: the behavior of the Phillips curve in the US during the global financial crisis, and the response of monetary policy to housing booms, secular stagnation, and globalization. / Series: Department of Economics Working Paper Series
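The paper's preferred method is its own simplified user cost; as a stand-in illustration, the textbook user cost formula UC = P(r + δ - E[π]) already shows why booms are delicate. All parameter values below are invented.

```python
# Textbook user cost of owner-occupied housing (a simplified stand-in for
# the paper's preferred method): price times interest rate plus
# depreciation minus expected capital appreciation.
def user_cost(price, interest, depreciation, expected_appreciation):
    return price * (interest + depreciation - expected_appreciation)

p = 300_000.0
print(user_cost(p, 0.04, 0.02, 0.01))   # approximately 15000 per year

# During a boom, extrapolated appreciation can push the user cost
# negative, which is one pathology the paper's method avoids.
print(user_cost(p, 0.04, 0.02, 0.08))   # negative, approximately -6000
```

A negative user cost would imply owners are being paid to occupy their homes, which is why naive user cost measures misbehave in booms and why the treatment of E[π] is the crux of the method.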
289

Nexo de causalidade: o art. 13 do CP e a teoria da imputação objetiva

Lima, André Estefam Araújo 14 May 2008 (has links)
This work examines the relation of causality through the theory of objective imputation, seeking to determine, from different approaches, the ideal criterion for attributing a normative result to criminally relevant conduct. It first discusses the function of criminal law (from the standpoint of doctrine and of Brazilian legislation) in order to establish the bases on which a correct theory of imputation can be built. It then analyzes the evolution of criminal-law systems, from the classical to the functionalist, to examine how each has approached the causal nexus. The central argument is that the nexus of causality cannot be considered from an exclusively naturalistic standpoint, lest criminal law become an appendix of the natural sciences. To avoid this, it is necessary, first, to define the system in which the structure of the crime should be anchored, and then to consider the peculiarities of the Brazilian legal system, which regulates the nexus of causality in the Criminal Code (art. 13). From these premises, this study proposes a harmonization between the material relation of causality, as stated in the Code, and the theory of objective imputation, as an adequate means of restraining the injustices that may follow from the rule laid down in the legal text.
290

A responsabilidade penal da pessoa jurídica por fato próprio : uma análise de seus critérios de imputação

Fabris, Gabriel Baingo 20 December 2016 (has links)
Amid social change, it has become clear that criminal law is being called upon to solve problems that were once unimaginable. As its field of application widens, it comes to encompass new legal goods, above all collective, supra-individual ones. As a result of this expansion, the range of responsibilities widens as well, extending to the legal person, a tendency also perceived in other legal systems. Using a systemic-constructivist methodology, the research draws on bibliographic sources, chiefly theories previously analyzed and discussed in the doctrine, and also encompasses legislative texts and an analysis of the case-law perspective on this political-criminal option. Just as problems arise in identifying authorship within business activity, problems arise in attributing responsibility through the rules of imputation inherent in criminal law. In response, the doctrine identifies two ways of solving them: applying the imputation rules of the individual who acts inside the company, or applying imputation rules specific to the legal entity. Assuming that imputation rules should be applied directly to the legal entity, given the development of business activities, an analysis is required of the adequacy of the imputation rules (action, subjective typicity and culpability) so that they can support the attribution of this liability. For this purpose, a theory of crime is developed on the basis of criteria proper to the legal entity, starting from its own organizational structure. The analysis shows that the doctrine is not settled and, although open to criticism, seeks a solution to this problem.
