111

Modelo para a avaliação do risco de crédito de municípios brasileiros / Model for the evaluation of the credit risk of Brazilian cities

Ernesto Fernando Rodrigues Vicente 22 January 2004 (has links)
In both the public and the private sector, financing needs are directly proportional to investment decisions: every monetary unit to be invested requires funds to be raised to finance it. When questions about financing needs are posed in the context of municipal finance, a gap appears for which, to date, no studies or research show how to measure the credit risk of Brazilian municipalities. Finding that answer is the objective of this work. A literature review supplied the theoretical foundation in finance and credit as well as in econometric modeling. The analysis of corporate insolvency models helped to identify the models that could be tested and adapted to municipal credit risk analysis. The Fiscal Responsibility Law (LRF), as a first step toward responsible public management, the likely future obligation to publish financial statements subject to independent audit, and the adoption of municipal credit ratings all motivated the development of a municipal credit risk model.
After obtaining the financial data of Brazilian municipalities (from the National Treasury Secretariat website) and demographic data (from the Brazilian Institute of Geography and Statistics CD "Base de informações municipais 3"), collecting the opinions of several specialists on the credit risk of various municipalities, processing these data, and building a database integrating all the selected information, discriminant analysis was applied to the resulting database, yielding a statistical model with an accuracy of approximately 70%.
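As a sketch of the discriminant-analysis step this abstract describes, the following applies a two-group Fisher linear discriminant to toy data. The financial indicators, their values, and the group labels are invented for illustration; this is not the dissertation's data or model.

```python
import numpy as np

def fisher_discriminant(X_good, X_bad):
    """Return weight vector w and threshold c separating two groups."""
    mu_g, mu_b = X_good.mean(axis=0), X_bad.mean(axis=0)
    # within-group scatter (groups of equal size, so a plain sum suffices)
    Sw = np.cov(X_good, rowvar=False) + np.cov(X_bad, rowvar=False)
    w = np.linalg.solve(Sw, mu_g - mu_b)
    c = w @ (mu_g + mu_b) / 2.0   # midpoint of the projected group means
    return w, c

# toy indicators (e.g. debt/revenue ratio, own-revenue share) -- hypothetical
good = np.array([[0.20, 0.80], [0.30, 0.70], [0.25, 0.75], [0.15, 0.90]])
bad  = np.array([[0.80, 0.30], [0.90, 0.20], [0.70, 0.35], [0.85, 0.25]])

w, c = fisher_discriminant(good, bad)
correct = (good @ w > c).sum() + (bad @ w <= c).sum()
accuracy = correct / (len(good) + len(bad))
```

On these cleanly separated toy groups the rule classifies every municipality correctly; on real data one would report the hit rate on held-out cases, as the 70% figure above does.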
112

Modélisation statistique et dynamique de la composition de la graine de tournesol (Helianthus annuus L.) sous l’influence de facteurs agronomiques et environnementaux / Statistical and dynamic modeling of sunflower (Helianthus annuus L.) grain composition under agronomic and environmental factors effects

Andrianasolo, Fety Nambinina 14 November 2014 (has links)
To meet the growing global demand for oil and protein, sunflower is a highly competitive crop thanks to its diversified markets and its environmental and nutritional appeal. Yet oil and protein contents are subject to genotypic and environmental effects that make them variable and hard to predict. We argue that better knowledge of the most important effects and their interactions should improve the prediction of these contents. Two modeling approaches were developed. In the first, three statistical models were built and compared with a simple existing model. The second, dynamic approach is based on the analysis of source-sink relationships during grain filling, in field and greenhouse experiments (2011 and 2012). The performance and domains of validity of the two types of model are compared.
113

Use of In-Stream Water Quality Measurements and Geospatial Parameters to Predict Consumer Surfactant Toxic Units in the Upper Trinity River Watershed, Texas

Johnson, David Richard 05 1900 (has links)
Surfactants are used in a wide assortment of "down-the-drain" consumer products, yet they are often discharged in wastewater treatment plant effluent into receiving waters, potentially causing environmental harm. The objective of this project was to predict surfactant toxic units and in-stream nutrients in the upper Trinity River watershed. Surface and pore water samples were collected in late summer 2005, and general chemistries and surfactant toxic units were determined. GIS data on anthropogenic and natural factors were compiled and analyzed by subwatershed. Multiple regression analyses using the maximum R² improvement method were performed to predict surfactant toxic units and in-stream nutrients from GIS and in-stream values. Both geospatial and in-stream parameters produced multiple regression models for surfactant surface and pore water toxic units, as well as for in-stream nutrients, with high R² values. Thus, modeling with GIS and in-stream parameters has the potential to be a reliable and inexpensive method of predicting surfactant toxic units and nutrient loading in the upper Trinity River watershed.
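The regression step above can be sketched as a plain ordinary-least-squares fit (not the maximum R² selection procedure itself). The predictor names and the synthetic response below are invented for illustration.

```python
import numpy as np

# hypothetical watershed predictors (one row per subwatershed)
pop_density = np.array([100., 250., 400., 800., 1200.])
pct_urban   = np.array([10., 20., 35., 60., 85.])
# synthetic response constructed with an exact linear relation for the demo
toxic_units = 0.002 * pop_density + 0.01 * pct_urban + 0.05

# design matrix with an intercept column, fitted by least squares
X = np.column_stack([np.ones_like(pop_density), pop_density, pct_urban])
beta, *_ = np.linalg.lstsq(X, toxic_units, rcond=None)

pred = X @ beta
ss_res = np.sum((toxic_units - pred) ** 2)
ss_tot = np.sum((toxic_units - toxic_units.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot   # the "high R²" criterion in the abstract
```

With real data the response is noisy and R² falls below 1; the maximum R² improvement method would additionally search over predictor subsets at each model size.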
114

Computational modeling for identification of low-frequency single nucleotide variants

Hao, Yangyang 16 November 2015 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Reliable detection of low-frequency single nucleotide variants (SNVs) carries great significance in many applications. In cancer genetics, the frequencies of somatic variants from tumor biopsies tend to be low due to contamination with normal tissue and tumor heterogeneity. Circulating tumor DNA monitoring also faces the challenge of detecting low-frequency variants because of the small percentage of tumor DNA in blood. Moreover, in population genetics, although pooled sequencing is cost-effective compared with individual sequencing, pooling dilutes the signal of variants from any one individual. Detection of low-frequency variants is difficult and can be confounded by multiple sources of error, especially next-generation sequencing artifacts. Existing methods are limited in sensitivity, mainly focus on frequencies around 5%, and mostly fail to consider differential, context-specific sequencing artifacts. To face this challenge, we developed a computational and experimental framework, RareVar, to reliably identify low-frequency SNVs from high-throughput sequencing data. For optimized performance, RareVar uses a supervised learning framework to model artifacts originating from different components of a specific sequencing pipeline. This is enabled by a customized, comprehensive benchmark dataset enriched with known low-frequency SNVs from the sequencing pipeline of interest. A genomic-context-specific sequencing error model was trained on the benchmark data to characterize systematic sequencing artifacts and to derive position-specific detection limits for sensitive low-frequency SNV detection. A machine-learning algorithm then used sequencing quality features to refine SNV candidates for higher specificity. RareVar outperformed existing approaches, especially at 0.5% to 5% frequency.
We further explored the influence of statistical modeling on position-specific error modeling and found the zero-inflated negative binomial to be the best-performing statistical distribution. When the analyses were replicated on an Illumina MiSeq benchmark dataset, our method adapted seamlessly to a technology with different biochemistry. RareVar enables sensitive detection of low-frequency SNVs across sequencing platforms and will facilitate research and clinical applications such as pooled sequencing, early cancer detection, prognostic assessment, metastatic monitoring, and identification of relapse or acquired resistance.
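The idea of a position-specific detection limit can be illustrated with a simple binomial error model (a generic sketch, not RareVar's actual model): given a background error rate at a position, find the smallest alternate-allele count whose probability under errors alone falls below a significance threshold.

```python
from math import comb

def binom_tail(n, p, k):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def detection_limit(coverage, error_rate, alpha=1e-6):
    """Smallest alt-read count not explainable by sequencing error alone."""
    for k in range(coverage + 1):
        if binom_tail(coverage, error_rate, k) < alpha:
            return k
    return coverage + 1

# e.g. at 1000x coverage with a 0.1% context-specific background error rate
limit = detection_limit(1000, 0.001)
min_detectable_frequency = limit / 1000
```

Contexts with higher background error rates yield higher limits, which is why a context-specific error model pays off for variants in the 0.5% to 5% range.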
115

Regression Models to Predict Coastdown Road Load for Various Vehicle Types

Singh, Yuvraj January 2020 (has links)
No description available.
116

[en] MEASUREMENTS AND MODELS FOR THE PROPAGATION LOSS AND RECEPTION QUALITY IN MOBILE BROADCAST SYSTEMS / [pt] MEDIÇÕES E MODELAGEM DA PERDA DE PROPAGAÇÃO E QUALIDADE DE RECEPÇÃO EM SISTEMAS DE RADIODIFUSÃO COM MOBILIDADE

CLARA ELIZABETH VERDUGO MUNOZ 09 August 2017 (has links)
[en] The Ministry of Communications has been encouraging evaluations of digital sound broadcasting systems in view of the future choice of the digital radio standard to be adopted in Brazil. From 2012 to 2014, CETUC, in partnership with the Ministry of Communications, Anatel, and Inmetro, carried out measurement campaigns in the medium-wave and VHF bands in several Brazilian cities to evaluate digital broadcasting standards and support the ongoing decision on the standard to be adopted. The campaigns comprised both static and mobile measurements. The static measurement data have already been analyzed and the results reported; this work analyzes the results of the mobile measurements. The first part of the study compares the experimental results with semi-empirical prediction models. Next, the variability of the received signal is analyzed statistically in terms of large- and small-scale fading. Finally, reception quality and digital signal coverage are assessed from the measurement data.
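A comparison of measurements with a semi-empirical model of the kind mentioned above can be sketched with the classic log-distance path-loss form, fitted to data by least squares. The measurement values below are synthetic, generated with a known exponent so the fit can be checked.

```python
import numpy as np

def path_loss_db(d_m, pl0_db, n, d0_m=1.0):
    """Log-distance model: PL(d) = PL(d0) + 10*n*log10(d/d0)."""
    return pl0_db + 10.0 * n * np.log10(d_m / d0_m)

# synthetic "measured" losses generated with n = 3 and PL(d0) = 40 dB
d = np.array([10., 50., 100., 500., 1000.])
measured = path_loss_db(d, pl0_db=40.0, n=3.0)

# recover intercept and path-loss exponent by linear regression on log10(d)
A = np.column_stack([np.ones_like(d), 10.0 * np.log10(d)])
pl0_fit, n_fit = np.linalg.lstsq(A, measured, rcond=None)[0]
```

With real drive-test data the residuals about this fitted line are the large-scale (shadowing) fading analyzed in the second part of the study.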
117

Multi-level Safety Performance Functions For High Speed Facilities

Ahmed, Mohamed 01 January 2012 (has links)
High speed facilities are considered the backbone of any successful transportation system; Interstates, freeways, and expressways carry the majority of daily trips on the transportation network. Although these roads are considered relatively the safest among road types, they still experience many crashes, many of them severe, which not only affect human lives but also have tremendous economic and social impacts. These facts signify the necessity of enhancing the safety of high speed facilities to ensure better and more efficient operation. Safety problems can be assessed through several approaches that help mitigate crash risk in the short and long term. Therefore, the main focus of the research in this dissertation is to provide a risk assessment framework to promote safety and enhance mobility on freeways and expressways. Multi-level Safety Performance Functions (SPFs) were developed at the aggregate level using historical crash data and the corresponding exposure and risk factors to identify and rank sites with promise (hot spots). Additionally, SPFs were developed at the disaggregate level utilizing real-time weather data collected from meteorological stations located along the freeway sections as well as traffic flow parameters collected from detection systems such as Automatic Vehicle Identification (AVI) and Remote Traffic Microwave Sensors (RTMS). These disaggregate SPFs can identify real-time risks due to turbulent traffic conditions and their interactions with other risk factors. In this study, two main datasets were obtained from two different regions, comprising historical crash data, roadway geometric characteristics, aggregate weather and traffic parameters, and real-time weather and traffic data.
At the aggregate level, Bayesian hierarchical models with spatial and random effects were compared to Poisson models to examine the safety effects of roadway geometrics on crash occurrence along freeway sections featuring mountainous terrain and adverse weather. At the disaggregate level, a framework for a proactive safety management system was developed using traffic data collected from AVI and RTMS, real-time weather, and geometric characteristics. Several statistical techniques were implemented, ranging from classical frequentist classification approaches, which relate the occurrence of a crash at a given time to a set of real-time risk factors, to more advanced models. A Bayesian updating approach, which combines prior knowledge with new observations to obtain more reliable parameter estimates, was implemented. A relatively recent and promising machine learning technique, Stochastic Gradient Boosting, was also used to calibrate several models on datasets collected from mixed detection systems and real-time meteorological stations. The results suggest that both levels of analysis are important: the aggregate level provides a good understanding of different safety problems and supports policies and countermeasures that reduce the total number of crashes, while at the disaggregate level, real-time safety functions support a more proactive traffic management system that not only enhances the performance of high speed facilities and the whole traffic network but also provides safer mobility for people and goods. In general, the proposed multi-level analyses give roadway authorities detailed information on where countermeasures must be implemented and when resources should be devoted.
The study also shows that traffic data collected from different detection systems can be a useful asset that, used appropriately, not only alleviates traffic congestion but also mitigates increased safety risks. The overall proposed framework can maximize the benefit of existing archived data for freeway authorities as well as for road users.
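An aggregate-level SPF of the standard exponential form used in this literature can be sketched as follows. The coefficients and variables below are invented for illustration and are not the dissertation's estimates.

```python
from math import exp

def spf(aadt, seg_len_mi, grade_pct, b0=-7.5, b1=0.85, b2=1.0, b3=0.06):
    """Expected crashes/year = exp(b0) * AADT^b1 * L^b2 * exp(b3 * grade).

    aadt: annual average daily traffic (exposure)
    seg_len_mi: segment length in miles
    grade_pct: vertical grade in percent (a geometry risk factor)
    """
    return exp(b0) * aadt**b1 * seg_len_mi**b2 * exp(b3 * grade_pct)

# illustrative comparison: same exposure, flat vs. steep mountainous section
mu_flat  = spf(aadt=30000, seg_len_mi=2.0, grade_pct=0.0)
mu_steep = spf(aadt=30000, seg_len_mi=2.0, grade_pct=6.0)
```

Ranking segments by such predicted frequencies (or, better, by empirical Bayes estimates that blend predictions with observed counts) is how "sites with promise" are identified at the aggregate level.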
118

Study of Fragility Functions for Assessing Damage to Water Pipe Networks Caused by Earthquake Loading

Merlo, Dylan Joseph 01 April 2021 (has links) (PDF)
The performance of water lifelines during seismic events is an area of ongoing research. In this study we evaluate eight (8) different seismic events and the impact that ground shaking and ground deformations had on water pipeline systems. The overall goal of this work is to provide municipalities and utility providers with tools for mitigating the consequences of seismic hazards on water lifeline systems by analyzing the accuracy of damage estimation models. Three (3) different repair rate models are evaluated using data collected from the seismic events and compared to observed repair rate data, and the results are analyzed to examine the utility of the models for forecasting damage. Results indicate that fragility functions that use a linear PGV-based function are the most accurate in predicting repair rates, based on residual plots developed for the different models. Differentiating between continuous and segmented water lifeline systems is best done by using coefficients to modify the backbone PGV-based equation. Results also indicate that adding a PGD-based function could increase the predictive capability of water lifeline fragility functions.
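A linear PGV-based fragility function of the kind the abstract finds most accurate can be sketched with the widely used ALA (2001) form, RR = K1 × 0.00187 × PGV (repairs per 1000 ft, PGV in in/s), where K1 modifies the backbone equation by pipe material and joint type. The K1 values and scenario numbers below are illustrative, not the study's calibrated coefficients.

```python
def repair_rate_per_1000ft(pgv_in_per_s, k1=1.0):
    """ALA (2001)-style linear model: RR = K1 * 0.00187 * PGV."""
    return k1 * 0.00187 * pgv_in_per_s

def expected_repairs(pgv_in_per_s, pipe_length_ft, k1=1.0):
    """Expected repair count over a pipe inventory of given length."""
    return repair_rate_per_1000ft(pgv_in_per_s, k1) * pipe_length_ft / 1000.0

# segmented brittle pipe (higher K1) vs. continuous ductile pipe (lower K1)
repairs_brittle = expected_repairs(pgv_in_per_s=20.0, pipe_length_ft=50000, k1=1.0)
repairs_ductile = expected_repairs(pgv_in_per_s=20.0, pipe_length_ft=50000, k1=0.3)
```

Comparing such predictions with observed post-earthquake repair counts, via residual plots, is the accuracy analysis the study performs; a PGD term would add ground-deformation damage on top of this shaking-only estimate.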
119

EDIFES 0.4: Scalable Data Analytics for Commercial Building Virtual Energy Audits

Pickering, Ethan M. 13 September 2016 (has links)
No description available.
120

Conception en vue de test de convertisseurs de signal analogique-numérique de type pipeline. / Design for test of pipelined analog to digital converters.

Laraba, Asma 20 September 2013 (has links)
Differential Non-Linearity (DNL) and Integral Non-Linearity (INL) are the two main static performances of Analog-to-Digital Converters (ADCs) typically measured during production testing. These two performances reflect the deviation of the transfer curve of the ADC from its ideal form. In the classic testing scheme, a saturated sine wave or slow ramp is applied to the ADC and the number of occurrences of each code is recorded to construct the histogram from which DNL and INL can be readily calculated. This standard approach requires the collection of a large volume of data, because each code needs to be traversed many times to average out noise, and the volume of data increases exponentially with the resolution of the ADC under test. According to recently published data, testing the mixed-signal functions of a System-on-Chip (SoC), such as data converters and phase-locked loops, contributes more than 30% of the total test time, although mixed-signal circuits occupy a small fraction of the SoC area, typically not exceeding 5%. Reducing ADC test time is therefore an area of industry focus and innovation. Pipeline ADCs offer a good compromise between speed, resolution, and power consumption; they are well suited to a variety of applications and are typically present in SoCs intended for video applications. By virtue of their operation, pipeline ADCs have groups of output codes of equal width.
Thus, instead of considering all the codes in the testing procedure, only one code out of each group can be measured, significantly reducing the static test time. In this work, a technique for efficiently applying reduced-code testing to pipeline ADCs is proposed. It exploits two main properties of the pipeline ADC architecture and yields an accurate estimation of the static performances. The technique is validated on an experimental 11-bit, 55 nm pipeline ADC from STMicroelectronics: the estimated DNL and INL are practically indistinguishable from those obtained with the standard histogram technique, while only 6% of the codes are measured.
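For context, the standard histogram method that reduced-code testing shortcuts can be sketched as follows: for a full-scale ramp, every code of an ideal ADC receives the same number of hits, DNL is each bin's deviation from that average, and INL is the running sum of DNL. The toy 3-bit histogram below is invented for illustration.

```python
import numpy as np

def dnl_inl_from_histogram(hist):
    """Compute DNL and INL (in LSB) from a ramp-input code histogram."""
    hist = np.asarray(hist, dtype=float)
    avg = hist.mean()              # expected hits per code for an ideal ADC
    dnl = hist / avg - 1.0         # relative code-width error, in LSB
    inl = np.cumsum(dnl)           # accumulated transfer-curve deviation
    return dnl, inl

# toy 3-bit ADC: code 3 is 50% too wide, code 4 is 50% too narrow
hist = [100, 100, 100, 150, 50, 100, 100, 100]
dnl, inl = dnl_inl_from_histogram(hist)
```

Each of the 8 codes here needed ~100 hits; for an 11-bit ADC the same averaging requires hundreds of thousands of samples, which is exactly the cost that measuring only one code per equal-width group avoids.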
