121

Evaluation and Optimization of Deep Learning Networks for Plant Disease Forecasting and Assessment of Their Generalizability for Early Warning Systems

Hannah Elizabeth Klein (15375262) 05 May 2023 (has links)
This research focused on developing adaptable models, datasets, and protocols for early warning systems that forecast plant diseases. It compared the performance of deep learning models in predicting soybean rust disease outbreaks using three years of public epidemiological data and gridded weather data. The models selected were a dense network and a Long Short-Term Memory (LSTM) network. The objectives included evaluating the effectiveness of small citizen science datasets and gridded meteorological data in sequential forecasting, assessing the ideal window size and the most important inputs, and exploring the generalizability of the model protocol and models to other diseases. The model protocol was developed using a soybean rust dataset. Both the dense and the LSTM networks produced accuracies of over 90% during optimization. When tested for forecasting, both networks could forecast with an accuracy of 85% or higher over various window sizes. Experiments on window size indicated a minimum input of 8 to 11 days. Generalizability was demonstrated by applying the same protocol to a southern corn rust dataset, resulting in 87.8% accuracy. In addition, transfer learning and pre-trained models were tested. Direct transfer learning between diseases was not successful, while pre-training models produced both positive and negative results. Preliminary results are reported for building generalizable disease models using epidemiological and weather data that researchers could apply to generate forecasts for new diseases and locations.
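To make the forecasting setup concrete, below is a minimal sketch (not the author's code) of how a sliding-window LSTM classifier over daily weather features might be assembled in Keras; the window length, feature count, layer sizes, and the synthetic stand-in data are all illustrative assumptions.

```python
import numpy as np
from tensorflow.keras import layers, models

WINDOW = 10      # days of weather history per sample (the abstract suggests 8 to 11 as a minimum)
N_FEATURES = 6   # e.g. temperature, humidity, rainfall, ... (illustrative count)

def make_windows(weather, labels, window=WINDOW):
    """Slice a (days, features) weather array into overlapping windows,
    each paired with the outbreak label observed at the end of the window."""
    X, y = [], []
    for t in range(window, len(weather)):
        X.append(weather[t - window:t])
        y.append(labels[t])
    return np.asarray(X, dtype="float32"), np.asarray(y, dtype="float32")

model = models.Sequential([
    layers.Input(shape=(WINDOW, N_FEATURES)),
    layers.LSTM(32),                        # sequence encoder over the weather window
    layers.Dense(16, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # P(outbreak)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Synthetic stand-in data; the real inputs would be gridded weather plus scouting reports.
rng = np.random.default_rng(0)
weather = rng.normal(size=(365, N_FEATURES))
labels = (rng.random(365) < 0.2).astype(int)
X, y = make_windows(weather, labels)
model.fit(X, y, epochs=5, batch_size=32, validation_split=0.2, verbose=0)
```

A dense-network baseline could reuse the same windows by replacing the LSTM layer with a Flatten layer followed by Dense layers.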
122

[pt] AJUSTE ÓTIMO POR LEVENBERG-MARQUARDT DE MÉTODOS DE PREVISÃO PARA INICIAÇÃO DE TRINCA / [en] OPTIMAL FIT BY LEVENBERG-MARQUARDT OF PREDICTION METHODS FOR CRACK INITIATION

GABRIELA WEGMANN LIMA 01 November 2022 (has links)
Most structures working under alternate loadings must be dimensioned to prevent fatigue crack initiation, the main mechanism of mechanical damage in these cases. The various parameters of the fatigue damage prediction models used in these projects should preferably be measured by optimally fitting their equations to well-measured experimental data. In fact, the accuracy of the predictions based on these models depends directly on the quality of the fits used to obtain these parameters. The main purpose of this work is therefore to study the best way to obtain the parameters of the leading prediction models of fatigue crack initiation through experimental data fittings based on the Levenberg-Marquardt algorithm. First, several εN tests were performed on a 6351-T6 aluminum alloy to verify the performance of the proposed fit for the Coffin-Manson and Ramberg-Osgood equations. Then, data from the literature for eight other materials were used to fit classic strain-life models, as well as models based on the Walker exponent, to evaluate the effect of non-zero mean loads in εN tests. Finally, the fitting of an SN model including the Walker exponent was studied, which considers fatigue limits and mean load effects. This study also includes statistical considerations to quantify the reliability factor from different probability density function assumptions, based on ten data sets from the literature.
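As an illustration of the fitting approach described above, the sketch below uses SciPy's Levenberg-Marquardt solver (method="lm" in curve_fit) to fit the Coffin-Manson strain-life equation; the elastic modulus, the synthetic εN data, and the initial guesses are assumed values, not results from the thesis.

```python
import numpy as np
from scipy.optimize import curve_fit

E = 68_000.0  # elastic modulus in MPa; assumed, roughly the order of an aluminum alloy

def strain_life(two_N, sigma_f, b, eps_f, c):
    """Total strain amplitude vs. reversals 2N: elastic (Basquin) term plus plastic (Coffin-Manson) term."""
    return (sigma_f / E) * two_N**b + eps_f * two_N**c

# Synthetic eps-N data generated from assumed coefficients plus noise (illustration only).
rng = np.random.default_rng(1)
true_params = (900.0, -0.09, 0.35, -0.55)
two_N = np.logspace(2, 6, 8)
eps_a = strain_life(two_N, *true_params) * (1 + 0.03 * rng.normal(size=two_N.size))

# Levenberg-Marquardt fit; method="lm" requires unbounded parameters and more points than parameters.
p0 = [800.0, -0.1, 0.5, -0.6]
params, _ = curve_fit(strain_life, two_N, eps_a, p0=p0, method="lm")
print(dict(zip(["sigma_f'", "b", "eps_f'", "c"], np.round(params, 4))))
```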
123

Using Portable X-ray Fluorescence to Predict Physical and Chemical Properties of California Soils

Frye, Micaela D 01 August 2022 (has links) (PDF)
Soil characterization provides the basic information necessary for understanding the physical, chemical, and biological properties of soils. Knowledge about soils can in turn be used to inform management practices, optimize agricultural operations, and ensure the continuation of ecosystem services provided by soils. However, current analytical standards for identifying each distinct property are costly and time-consuming. The optimization of laboratory-grade technology for wide-scale use is demonstrated by advances in a proximal soil sensing technique known as portable X-ray fluorescence spectrometry (pXRF). pXRF analyzers use high-energy X-rays that interact with a sample to cause characteristic fluorescence, which the analyzer resolves by energy and intensity to determine the chemical composition of the sample. While pXRF only measures total elemental abundance, the concentrations of certain elements have been used as a proxy to develop models capable of predicting soil characteristics. This study aimed to evaluate existing models and model-building techniques for predicting soil pH, texture, cation exchange capacity (CEC), soil organic carbon (SOC), total nitrogen (TN), and C:N ratio from pXRF spectra, and to assess their suitability for California soils by comparing predictions to results from laboratory methods. Multiple linear regression (MLR) and random forest (RF) models were created for each property using a training subset of data and evaluated by R2, RMSE, RPD, and RPIQ on an unseen test set. The California soils sample set comprised 480 soil samples from across the state that were subject to laboratory and pXRF analysis in GeoChem mode. Results showed that existing data models applied to the CA soils dataset lacked predictive ability. In comparison, data models generated using MLR with 10-fold cross-validation for variable selection improved predictions, while algorithmic modeling produced the best estimates for all properties besides pH. The best models produced for each property gave RMSE values of 0.489 for pH, 10.8 for sand %, 6.06 for clay % (together predicting the correct texture class 74% of the time), 6.79 for CEC (cmolc/kg soil), 1.01 for SOC %, 0.062 for TN %, and 7.02 for C:N ratio. Where R2 and RMSE were observed to fluctuate inconsistently with a change in the random train/test splits, RPD and RPIQ were more stable, which may indicate a more useful representation of out-of-sample applicability. RF modeling for TN content provided the best predictive model overall (R2 = 0.782, RMSE = 0.062, RPD = 2.041, and RPIQ = 2.96). RF models for CEC and TN % achieved RPD values >2, indicating stable predictive models (Cheng et al., 2021). Lower RPD values between 1.75 and 2 and RPIQ >2 were also found for MLR models of CEC and TN %, as well as RF models for SOC. Better estimates for chemical properties (CEC, TN, SOC) compared to physical properties (texture) may be attributable to a correlation between elemental signatures and organic matter. All models were improved with the addition of categorical variables (land use and sample set), but at a great statistical cost (9 extra predictors). Separating models by land type and lab characterization method revealed some improvements within land types, but these effects could not be fully untangled from sample set. Thus, the consortium of characterizing bodies providing the 'true' lab data may have been a drawback in model performance, by confounding inter-lab errors with predictive errors.
Future studies using pXRF analysis for soil property estimation should investigate how predictive models are affected by characterization method and lab body. While statewide models for California soils provided what may be an acceptable level of error for some applications, models calibrated for a specific site using consistent lab characterization methods likely provide a higher degree of accuracy for indirect measurements of some key soil properties.
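The following is a minimal sketch, with synthetic stand-in data rather than the 480 California samples, of the random-forest workflow and the RPD/RPIQ statistics used above; the feature dimensions and the toy target are assumptions for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(42)
# Stand-ins for pXRF elemental readings and a lab-measured property (e.g. total N %).
X = rng.normal(size=(480, 20))
y = X[:, :3].sum(axis=1) + 0.5 * rng.normal(size=480)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X_tr, y_tr)
pred = rf.predict(X_te)

rmse = mean_squared_error(y_te, pred) ** 0.5
rpd = np.std(y_te, ddof=1) / rmse                                   # ratio of performance to deviation
rpiq = (np.percentile(y_te, 75) - np.percentile(y_te, 25)) / rmse   # ratio of performance to IQR
print(f"R2={r2_score(y_te, pred):.3f}  RMSE={rmse:.3f}  RPD={rpd:.2f}  RPIQ={rpiq:.2f}")
```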
124

Probabilistic and Statistical Learning Models for Error Modeling and Uncertainty Quantification

Zavar Moosavi, Azam Sadat 13 March 2018 (has links)
Simulations and modeling of large-scale systems are vital to understanding real-world phenomena. However, even advanced numerical models can only approximate the true physics. The discrepancy between model results and nature can be attributed to different sources of uncertainty, including the parameters of the model, input data, or some missing physics that is not included in the model due to a lack of knowledge or high computational costs. Uncertainty reduction approaches seek to improve model accuracy by decreasing the overall uncertainties in models. Aiming to contribute to this area, this study explores uncertainty quantification and reduction approaches for complex physical problems. This study proposes several novel probabilistic and statistical approaches for identifying the sources of uncertainty, modeling the errors, and reducing uncertainty to improve the model predictions for large-scale simulations. We explore different computational models. The first class of models studied herein is inherently stochastic, and numerical approximations suffer from stability and accuracy issues. The second class consists of partial differential equations, which capture the laws of mathematical physics; however, they only approximate a more complex reality and have uncertainties due to missing dynamics that are not captured by the models. The third class consists of low-fidelity models, which are fast approximations of very expensive high-fidelity models. The reduced-order models have uncertainty due to loss of information in the dimension reduction process. We also consider uncertainty analysis in the data assimilation framework, specifically for ensemble-based methods where the effect of sampling errors is alleviated by localization. Finally, we study the uncertainty in numerical weather prediction models coming from approximate descriptions of physical processes. / Ph. D.
125

Konkursprognostisering : En tillämpning av tre internationella modeller (Bankruptcy prediction: an application of three international models)

Malm, Hanna, Rodriguez, Edith January 2015 (has links)
Background: Each year many companies go bankrupt, which is associated with significant costs in the short term. Creditors, owners, investors, management, employees and society are those most affected by a bankruptcy. To assess a company's financial health, it is therefore important to be able to predict the risk of bankruptcy. To help with this, various bankruptcy prediction models have been developed from the 1960s up to today. Purpose: To examine three international bankruptcy prediction models to see if they are applicable to Swedish companies, and to compare the accuracy obtained in our study with that of each model's original study. Method: The study was based on a quantitative research strategy with a deductive approach. The selection was based on companies that went bankrupt in 2014, together with a control group consisting of an equal number of healthy companies. The random sample consisted of 30 bankrupt companies and 30 healthy companies from the manufacturing and industrial sectors. Theory: In this study three bankruptcy prediction models are examined: Altman, Fulmer and Springate. These models, as well as previous research on bankruptcy prediction, are described further in the theory section, along with some financial ratios that are relevant for bankruptcy prediction. Result and conclusion: The models are not applicable to Swedish companies; the results of this study did not show sufficient accuracy, and the models must therefore be regarded as unreliable.
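For reference, one of the three examined models, the original Altman (1968) Z-score for publicly traded manufacturing firms, can be computed as in the sketch below; the input figures are invented purely for illustration.

```python
def altman_z(working_capital, retained_earnings, ebit,
             market_value_equity, sales, total_assets, total_liabilities):
    """Original Altman (1968) Z-score for publicly traded manufacturing firms."""
    x1 = working_capital / total_assets
    x2 = retained_earnings / total_assets
    x3 = ebit / total_assets
    x4 = market_value_equity / total_liabilities
    x5 = sales / total_assets
    return 1.2 * x1 + 1.4 * x2 + 3.3 * x3 + 0.6 * x4 + 1.0 * x5

def classify(z):
    """Cut-offs from the original study: below 1.81 distress, above 2.99 safe."""
    if z > 2.99:
        return "safe zone"
    if z < 1.81:
        return "distress zone"
    return "grey zone"

# Invented balance-sheet figures (e.g. in kSEK) purely for illustration.
z = altman_z(working_capital=50, retained_earnings=120, ebit=80,
             market_value_equity=400, sales=900, total_assets=1000, total_liabilities=350)
print(round(z, 2), classify(z))
```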
126

An analysis of a dust storm impacting Operation Iraqi Freedom, 25-27 March 2003

Anderson, John W. 12 1900 (has links)
Approved for public release; distribution is unlimited. / On day five of combat operations during Operation IRAQI FREEDOM, advances by coalition forces were nearly halted by a dust storm initiated by the passage of a synoptically driven cold front. This storm impacted ground and air operations across the entire Area of Responsibility and delayed an impending ground attack on the Iraqi capital. Military meteorologists were able to assist military planners in mitigating at least some of the effects of this storm. This thesis examines the synoptic conditions leading to the severe dust storm, evaluates numerical weather prediction model performance in predicting the event, and reviews metrics pertaining to the overall impacts on the Operation IRAQI FREEDOM combined air campaign. In general, the numerical model guidance correctly predicted the location and onset of the dust storms on 25 March 2003. As a result of this forecast guidance, mission planners were able to front-load Air Tasking Orders with extra sorties prior to the onset of the dust storm and were able to make changes to planned weapons loads, favoring GPS-guided munitions. / Captain, United States Air Force
127

Mensuração da biomassa e construção de modelos para construção de equações de biomassa / Biomass measurement and model selection for biomass equations

Vismara, Edgar de Souza 07 May 2009 (has links)
Forest biomass measurement implies a destructive procedure; thus forest inventories and biomass surveys apply indirect procedures for the determination of the biomass of the different components of the forest (wood, branches, leaves, roots, etc.). The usual approach consists in taking a destructive sample for the measurement of tree attributes, and an empirical relationship is established between the biomass and other attributes that can be directly measured on standing trees, e.g., stem diameter and tree height.
The biomass determination of felled trees can be achieved by two techniques: the gravimetric technique, which weighs the components in the field and takes a sample for the determination of water content in the laboratory; and the volumetric technique, which determines the volume of the component in the field and takes a sample for the determination of the wood specific gravity (wood basic density) in the laboratory. The gravimetric technique applies to all components of the trees, while the volumetric technique is usually restricted to the stem and large branches. In this study, these two techniques are studied in a sample of 200 trees of 10 different species from the region of Linhares, ES. In each tree, 5 cross-sections of the stem were taken to investigate the best procedure for the determination of water content in the gravimetric technique and of the wood specific gravity in the volumetric technique. Also, the Akaike Information Criterion (AIC) was used to compare different statistical models for the prediction of tree biomass. For the stem water content determination, the best procedure was the arithmetic mean of the water content from the cross-sections at the base, middle and top of the stem. In the determination of wood specific gravity, the best procedure was the arithmetic mean of all five cross-section discs of the stem; however, for the determination of the biomass, i.e., the product of stem volume and wood specific gravity, the best procedure was the use of the wood specific gravity of the middle stem cross-section disc. The use of an average wood specific gravity by species showed worse results than any procedure that used information on wood specific gravity at the individual tree level. Seven models, as variations of the Spurr and Schumacher-Hall volume equation models, were tested for the different tree components: wood (stem and large branches), small branches, leaves and total biomass. In general, Schumacher-Hall models were better than Spurr-based models, and models that included only diameter (DBH) information performed better than models with diameter and height measurements. When a measure of penetration into the wood, as a surrogate of wood density, was added to the models, the models with the three variables (diameter, height and penetration) became the best models.
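A minimal sketch of the model comparison described above is given below: the log-linear forms of the Spurr and Schumacher-Hall equations are fitted with ordinary least squares and compared by AIC. The simulated DBH, height, and biomass values are stand-ins, not the Linhares data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated destructive sample: DBH (cm), total height (m), above-ground biomass (kg).
rng = np.random.default_rng(7)
n = 200
dbh = rng.uniform(5, 40, n)
h = 1.3 + 0.8 * dbh**0.7 + rng.normal(0, 1, n)
biomass = np.exp(-2.0 + 2.3 * np.log(dbh) + 0.6 * np.log(h) + rng.normal(0, 0.2, n))
df = pd.DataFrame({"dbh": dbh, "h": h, "b": biomass, "d2h": dbh**2 * h})

# Log-linear forms of the two classical allometric models.
spurr = smf.ols("np.log(b) ~ np.log(d2h)", data=df).fit()
schumacher_hall = smf.ols("np.log(b) ~ np.log(dbh) + np.log(h)", data=df).fit()

# Lower AIC indicates the preferred model.
print("Spurr AIC:          ", round(spurr.aic, 1))
print("Schumacher-Hall AIC:", round(schumacher_hall.aic, 1))
```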
128

Predição de mudanças conjuntas de artefatos de software com base em informações contextuais / Predicting co-changes of software artifacts based on contextual information

Wiese, Igor Scaliante 18 March 2016 (has links)
Co-change prediction aims to make developers aware of which artifacts may change together with the artifact they are working on. In the past, researchers relied on structural analysis to build prediction models. More recently, hybrid approaches relying on historical information and textual analysis have been proposed. Despite the advances in the area, software developers still do not use these approaches widely, presumably because of the number of false recommendations.
The hypothesis of this thesis is that contextual information of software changes collected from issues, developers' communication, and commit metadata describes the circumstances and conditions under which a co-change occurs and is useful to predict co-changes. The aim of this thesis is to use contextual information to build co-change prediction models that improve the overall accuracy, especially by decreasing the number of false recommendations. We built predictive models specific to each pair of files using contextual information and the random forest machine learning algorithm. The approach was evaluated on 129 versions of 10 open source projects from the Apache Software Foundation. We compared our approach to a baseline model based on association rules, which is often used in the literature. We evaluated the performance of the prediction models, investigating the influence of data aggregation to build training and test sets, as well as the identification of the most relevant contextual information. The results indicate that models based on contextual information can correctly predict 88% of co-change instances, against 19% achieved by the association rules model, meaning the contextual models can be 3 times more accurate. Models created with contextual information collected in each software version were more accurate than models built from an arbitrary amount of contextual information collected from more than one version. The most important pieces of contextual information for building the prediction models were: number of lines of code added or modified, number of lines of code removed, code churn, number of words in the discussion and description of a task, number of comments, and the role of developers in the discussion (measured by the betweenness value obtained from the communication social network). We asked project developers about the relevance of the results obtained by the prediction models based on contextual information. According to them, the results can help developers who are new to the project, since these developers have no knowledge of the architecture and are usually not familiar with the history of the artifacts. Thus, our results indicate that prediction models based on contextual information are useful to support developers during maintenance and evolution activities.
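Below is a minimal sketch, on synthetic data, of the per-file-pair setup described above: a random forest classifier trained on contextual features such as churn, issue discussion size, and developer network centrality. The feature names and the toy labelling rule are assumptions for illustration only.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score

rng = np.random.default_rng(3)
n = 400  # tasks/commits touching file A in one release (illustrative)
features = pd.DataFrame({
    "lines_added_modified": rng.poisson(30, n),
    "lines_removed":        rng.poisson(10, n),
    "code_churn":           rng.poisson(45, n),
    "issue_words":          rng.poisson(80, n),
    "issue_comments":       rng.poisson(5, n),
    "dev_betweenness":      rng.random(n),
})
# Label: did file B change together with file A in the same task? (toy rule, illustration only)
co_changed = (features["code_churn"] + 50 * features["dev_betweenness"]
              + rng.normal(0, 10, n)) > 70

X_tr, X_te, y_tr, y_te = train_test_split(features, co_changed, random_state=0)
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)
print("precision:", round(precision_score(y_te, pred), 2),
      "recall:", round(recall_score(y_te, pred), 2))
```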
129

Internationella komparativa studier av lagar om tvångsvård vid missbruk : omfattning, trender och mänskliga rättigheter (International comparative studies of legislation on compulsory care of substance misusers: extent, trends and human rights)

Israelsson, Magnus January 2013 (has links)
The Universal Declaration of Human Rights and Fundamental Freedoms and the International Covenant on Economic, Social and Cultural Rights state that everyone has the right to good health. According to the conventions, states have obligations to prevent and combat disease and, if necessary, ensure that the conditions for treatment of the disease are appropriate (UDHR 1948, UNCESCR 1966). The broad wording in the conventions on the right to good health includes the right to care for substance use disorders. In the 1960s the World Health Organization recommended that people with such disorders should be seen as sick and that the legislation governing such care should be in accordance with special administrative legislation and not criminal legislation. The recommendation indicates WHO's clear position that persons with substance use disorders should primarily be treated as persons suffering from disease and in need of care, and not primarily as disruptive individuals or criminals who should be disciplined or punished. This applies also to situations when treatment and care cannot be provided on a voluntary basis, but compulsorily. In the Swedish context, the most commonly mentioned law in these cases is the special social legislation, the Law (1988:870) on care of misusers, special provisions (LVM). Ever since the implementation of LVM in 1982, its legal position as well as its application in institutional care has been the subject of critical discussion within social work as well as in social science research. Such debate in the Nordic countries has until now mostly been marked by two important limitations. First, most comparisons are restricted to very few countries, e.g. four of the Nordic countries; secondly, the notion of involuntary care is often limited to social legislation on compulsory care without taking criminal justice legislation or mental health legislation into account. The present dissertation studies legislation on compulsory commitment to care of persons with substance use problems (CCC), and compares such legislation from a larger number of countries, on global or European levels. This approach makes it possible to explore the great variation in CCC legislation between countries, i.e. type of law (criminal justice, mental health care and social or special legislation), time limits (maximum duration) as well as levels of ambition, ethical grounds, criteria for admission, and adaptation to human and civil rights. In addition, the comparisons between many countries are used to investigate factors related to different national choices in legislation based on country characteristics, e.g. historical and cultural background as well as economic and social conditions, including level and type of welfare distribution. Available datasets from different times permit trend analyses to investigate whether CCC, or specific types of it, is increasing or decreasing internationally.

Empirical materials: Article I is based on three reports from the WHO on the existence of CCC legislation, before the millennium shift, in 90 countries and territories in all populated continents. Articles II and IV are based on the author's own data collection from a survey in 38 European countries. Article III uses a combination of those data and additional information from country reports in scientific and institutional publications at three times of observation over more than 25 years, including a total of 104 countries. Additional data for Articles I and II are information on various countries' characteristics obtained from different international databases.

Findings based on data from the WHO reports on the eve of the millennium show that CCC legislation was very common in the world, since 82 per cent of the 90 countries and territories had such a law. Special administrative ("civil") legislation (mental health or social) was somewhat more prevalent (56%), but CCC in criminal justice legislation was also frequent and present in half of the countries. The study shows that economically stronger countries in the western world and many of the former communist countries in Eastern Europe, the so-called "first and second worlds" in cold war rhetoric, had more often adapted to the recommendations made by the WHO in the 1960s, with CCC more often regulated in civil legislation. In the so-called "third world" countries, CCC in criminal justice legislation dominated. The new data collection from 38 European countries ten years later confirmed that legislation on CCC is very common, since 74 per cent of the explored countries have some type of legislation. The most common type was now CCC in criminal legislation (45%), although special administrative legislation (mental health or social) was almost equally common (37%). Special administrative legislation on CCC (both acute and rehabilitative) was more common in countries with historic experience of a strong and influential temperance movement, and in countries where distribution of health and welfare is directed more through the state, while countries with less direct government involvement in the distribution of health and welfare and lacking former influence of a strong temperance movement more often had CCC in criminal justice legislation. During the whole 25-year period from the early 1980s up to 2009, it was more common for countries to have some type of law on CCC than not, although some reduction of CCC legislation is shown, especially during the last decade. But within countries having CCC, more cases are compulsorily committed and for longer durations. This is related to a global shift from civil CCC to CCC in criminal justice legislation, directly in the opposite direction from what the WHO recommended in the 1960s. Changes in CCC legislation are often preceded by national political debate on ethical considerations and by criticisms questioning the efficiency and content of the care provided. Such national debates are frequent with all types of CCC legislation, but ethical considerations seem to be far more common in relation to special administrative (civil) legislation. National legislation on CCC within Europe should conform to the human and civil rights stipulated in the ECHR (1950). There seem to be some limitations in the procedural rules that should protect persons with misuse or dependence problems from unlawful detention, regardless of type of law. The three types of law differ significantly in terms of criteria for CCC, i.e. the situations in which care may be ensured regardless of consent.

Conclusions: It is more common for societies to have legislation on CCC than not. This applies internationally, in all parts of the world, as well as over time, for a period of at least 25 years. Sweden's legislative position is not internationally unique; on the contrary, it is quite common. Laws on CCC tend to be introduced in times of drug epidemics or when drug-related problems are increasing in a society. Changes in CCC legislation are often preceded by national debates on the ethics, content and benefits of such care. The findings discussed here may reflect different concurrent processes. A shift from a welfare logic to a moral logic may be understood as increased moralization, perhaps due to a relative awakening of traditionalism related to religious movements in various parts of the world (Christian, Hindu, Muslim or other). But it may also be understood in terms of a growing libertarianism that stresses both individual responsibility for one's welfare and the state's responsibility to discipline behaviours that impinge negatively on the lives of others. Possibly these two tendencies work in conjunction with one another. At the same time, however, there is a stronger emphasis on care content within criminal justice CCC, especially in the Anglo-Saxon drug court system. Some shift within civil CCC is also noticed, i.e. from social to mental health legislation. Thus drug abuse and dependence are increasingly recognized and managed in the same way as other diseases, i.e. an increased normalization. Since social CCC has been more in the focus of research and debate, this may also result in CCC turning into a more hidden praxis, which is problematic from an ethical perspective. The thesis shows that there are examples of a focus on humanity and care in all three of the law types, but there are also examples of passive care, sometimes even inhumane and repressive, in all types. Thus, type of law cannot be said to correspond in general to a specific content of care. Although CCC can be delivered in accordance with human and civil rights, there is still an unsatisfactory situation concerning the procedural rights that should ensure the misuser his/her right to freedom from unlawful detention. The possibility to appeal to a higher instance is missing in about 20 per cent of European CCC laws, although this does not differentiate one type of legislation from the others. A clear difference between the three law types concerns the criteria that form the basis for who will be provided care according to the laws. This is of major importance for which persons in need will receive care: addicted offenders, acting-out persons, or the most vulnerable. The criteria for selecting these relate to the implicit ambitions of CCC: correction, protection, or support to those in greatest need of care. The question is what ambition a society should have concerning care without consent in cases of substance abuse and addiction problems. The trend that CCC according to special administrative legislation is declining while criminal legislation is increasing in the world should therefore be noted.

Keywords: Alcohol, drugs, substance misuse, coercive care, compulsory commitment to care, involuntary care, mandatory care, legislation, human and civil rights, comparative analysis, prediction models, and trend analysis / At the time of the doctoral defence the following paper was unpublished: Paper 4, submitted.
130

A confiança do consumidor como previsor da produção industrial: um modelo alternativo (Consumer confidence as a predictor of industrial production: an alternative model)

Ferreira, Gabriel Goulart 25 May 2009 (has links)
This dissertation analyzes the importance of consumer confidence indexes in the United States and the potential of Brazil's similar index, the ICC FGV, for predicting and tracking the performance of the local economy. As a practical result, it proposes a new alternative model for short-term prediction of the Monthly Industrial Survey (PIM), IBGE's nationwide survey of industrial activity.
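A minimal sketch of such an alternative model, assuming monthly data and using lagged changes in the confidence index as regressors in an OLS model, is shown below; the series are simulated, not the actual ICC FGV or PIM data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated monthly series standing in for the confidence index (ICC) and industrial output growth (PIM).
rng = np.random.default_rng(11)
n = 120
icc = 100 + np.cumsum(rng.normal(0, 1.5, n))     # confidence index level
d_icc = np.diff(icc, prepend=icc[0])             # month-over-month change in confidence
pim_growth = 0.3 * np.roll(d_icc, 1) + rng.normal(0, 0.8, n)
pim_growth[0] = 0.0                              # first month has no prior confidence change

df = pd.DataFrame({"pim_growth": pim_growth, "d_icc": d_icc})
df["d_icc_lag1"] = df["d_icc"].shift(1)          # confidence is published before the production figure
df["d_icc_lag2"] = df["d_icc"].shift(2)
df = df.dropna()

X = sm.add_constant(df[["d_icc_lag1", "d_icc_lag2"]])
fit = sm.OLS(df["pim_growth"], X).fit()
print(fit.params)
print("in-sample R2:", round(fit.rsquared, 3))
```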
