21 |
Fault-Tolerant Control of Unmanned Underwater Vehicles / Ni, Lingli, 03 July 2001 (has links)
Unmanned Underwater Vehicles (UUVs) are widely used in commercial, scientific, and military missions for various purposes. What makes this technology challenging is the increasing mission duration and the unknown environment. It is necessary to embed fault-tolerant control paradigms into UUVs to increase the reliability of the vehicles and enable them to execute and complete complex missions. Specifically, fault-tolerant control (FTC) comprises fault detection, identification, and control reconfiguration for fault compensation. A literature review shows that earlier investigations offered no systematic methods for fault-tolerant control of UUVs. This study presents a hierarchical methodology of fault detection, identification and compensation (HFDIC) that integrates these functions systematically at different levels. The method uses adaptive finite-impulse-response (FIR) modeling and analysis in its first level to detect failure occurrences. Specifically, it incorporates an FIR filter for on-line adaptive modeling and a least-mean-squares (LMS) algorithm to minimize the output error between the monitored system and the filter during the modeling process. By analyzing the resulting adaptive filter coefficients, we extract information on the fault occurrence. The HFDIC also includes a two-stage design of parallel Kalman filters in levels two and three for fault identification using multiple-model adaptive estimation (MMAE). The algorithm activates the later levels only when a failure is detected, and can return to the monitoring loop in the case of false alarms. On the basis of MMAE, we use multiple sliding-mode controllers and reconfigure the control law with a probability-weighted average of all the elemental control signals in order to compensate for the fault.
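As a minimal sketch of the first-level detection idea (not the thesis implementation), the Python fragment below adapts an FIR model with LMS and flags samples where the coefficient vector drifts away from a slowly averaged baseline; the signal names, step size, baseline factor, threshold, and warm-up length are all illustrative assumptions.

```python
import numpy as np

def lms_fir_fault_monitor(u, y, n_taps=8, mu=0.05, beta=0.995,
                          drift_threshold=0.3, warmup=500):
    """Track the monitored system with an LMS-adapted FIR filter and flag samples
    where the coefficients drift far from a slowly averaged baseline (sketch only)."""
    w = np.zeros(n_taps)           # adaptive FIR coefficients
    w_base = np.zeros(n_taps)      # slowly varying reference of the coefficients
    alarms = []
    for k in range(n_taps - 1, len(u)):
        x = u[k - n_taps + 1:k + 1][::-1]          # current and recent past inputs
        e = y[k] - w @ x                           # output error (plant vs. FIR model)
        w = w + 2.0 * mu * e * x                   # LMS update minimizing the squared error
        w_base = beta * w_base + (1.0 - beta) * w  # baseline lags the adapted coefficients
        if k > warmup and np.linalg.norm(w - w_base) > drift_threshold:
            alarms.append(k)                       # abrupt coefficient change: possible fault
    return w, alarms

# Toy check: an actuator-like gain drop halfway through a simulated record
rng = np.random.default_rng(0)
u = rng.standard_normal(2000)
gain = np.where(np.arange(2000) < 1000, 1.0, 0.3)
y = gain * np.convolve(u, [0.5, 0.3, 0.2])[:len(u)] + 0.01 * rng.standard_normal(2000)
_, alarms = lms_fir_fault_monitor(u, y)
print("first alarm at sample:", alarms[0] if alarms else "none")
```

In this toy run the coefficient drift is only checked after a warm-up period so that the initial adaptation transient is not reported as a fault.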
We validate the HFDIC on the steering and diving subsystems of the Naval Postgraduate School (NPS) UUVs for various simulated actuator and/or sensor failures, and test the hierarchical fault detection and identification (HFDI) scheme with realistic data from at-sea experiments with the Florida Atlantic University (FAU) Autonomous Underwater Vehicles (AUVs). In both cases, we model actuator and sensor failures as additive parameter changes in the observation matrix and the output equation, respectively.
Simulation results demonstrate the ability of the HFDIC to detect failures in real time, identify failures accurately with low computational overhead, and compensate for actuator and sensor failures through control reconfiguration. In particular, verification of the HFDI with the FAU data confirms the performance of the fault detection and identification methodology and provides important information on vehicle performance. / Ph. D.
22 |
Staple Crop Diversity and Risk Mitigation - Potatoes in Bolivia / Castelhano, Michael Joseph, 18 November 2008 (has links) (PDF)
Rural areas of most developing nations are dependent on agriculture. In the most remote areas, sometimes referred to as the "less favored areas" (LFAs), the economic importance of agriculture is paramount. An important obstacle to development in these areas is that agriculture is at the mercy of nature, which may not be particularly friendly. These areas have remained remote because of natural disadvantages, and economic development has occurred more slowly there than in more advantaged areas elsewhere. Cochabamba Department, in central Bolivia, is home to some of these LFAs. Most Cochabamban producers are located in the "high climatic risk" (CIP-WPA) Andean highlands. Farmers in LFAs surrounding Cochabamba city produce (among other things) potatoes for market and home consumption; the potato is the main source of food and income for most residents. Previous studies and anecdotal evidence have shown that Andean potato farmers may plant upwards of 10 varieties of potatoes on small amounts of land (Brush, 92). Because of the low rates of improved crop variety adoption in many LFAs, efforts are needed to understand farmer objectives and needs with respect to variety characteristics. The goal of this study is to determine how exposure to risk factors impacts potato planting decisions through the demand for potato variety characteristics. The main source of data for this project is a survey of 145 farm households implemented during the last quarter of 2007 in three communities of Cochabamba. These data were used to estimate an econometric model that evaluated the role of household, regional and variety characteristics in farmer decision making. Decisions about planting each variety were modeled with a Tobit framework and estimated by the Heckman method (as suggested by Cameron and Trivedi), with the impact of individual variety characteristics restricted to be the same for each variety. Several hypotheses were confirmed, such as the importance of yield, though many results differed from expectations. Blight tolerance was found to be negatively correlated with selection, although most farmers report taking some kind of action to decrease damage from blight. Possible explanations for this negative correlation are discussed in this paper, and strategies for overcoming these obstacles are suggested. / Master of Science
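To make the estimation strategy concrete, the sketch below outlines a two-step Heckman estimator of the planted-area decision, assuming statsmodels is available: a probit models whether a household plants a variety at all, and the area equation for planters includes the inverse Mills ratio. The column names (such as area_planted) and the regressor lists are hypothetical placeholders, not the survey's actual variables.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

def heckman_two_step(df, select_vars, outcome_vars):
    """Two-step Heckman estimator for area planted to one variety (sketch).
    Step 1: probit for whether the household plants the variety at all.
    Step 2: OLS on planted area for planters only, with the inverse Mills
    ratio added to correct for selection."""
    X_sel = sm.add_constant(df[select_vars])
    planted = (df["area_planted"] > 0).astype(int)
    probit = sm.Probit(planted, X_sel).fit(disp=0)

    xb = np.asarray(X_sel) @ probit.params            # linear predictor from the probit
    mills = norm.pdf(xb) / norm.cdf(xb)               # inverse Mills ratio

    sub = df[planted == 1].copy()
    X_out = sm.add_constant(sub[outcome_vars])
    X_out["mills"] = mills[planted.to_numpy() == 1]   # selection-correction regressor
    ols = sm.OLS(sub["area_planted"], X_out).fit()
    return probit, ols
```

In practice the selection equation would include at least one variable excluded from the area equation, so that the selection correction is not identified by functional form alone.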
23 |
A Nonlinear Equivalent Frame Model For Displacement Based Analysis Of Unreinforced Brick Masonry Buildings / Demirel, Ismail Ozan, 01 December 2010 (has links) (PDF)
Although performance-based assessment procedures have mainly been developed for reinforced concrete and steel buildings, unreinforced masonry (URM) buildings occupy a significant portion of the building stock in earthquake-prone areas of the world, as well as in Turkey. The variability of material properties, the non-engineered nature of the construction, and difficulties in the structural analysis of perforated walls make the analysis of URM buildings challenging. Although sophisticated finite element models satisfy the modeling requirements, they need extensive experimental data to define material behavior as well as high computational resources. Recently, nonlinear equivalent frame models, developed by assigning lumped plastic hinges to isotropic and homogeneous equivalent frame elements, have been used for nonlinear modeling of URM buildings.
The work presented in this thesis concerns the performance assessment of unreinforced brick masonry buildings in Turkey through the nonlinear equivalent frame modeling technique.
Reliability of the proposed model is tested against a reversed cyclic experiment conducted on a full-scale, two-story URM building at the University of Pavia and a dynamic shake-table test on a half-scale, two-story URM building at the ISMES Laboratory in Bergamo. Good agreement between numerical and experimental results is found.
Finally, pushover and nonlinear time history analyses of three unreinforced brick masonry buildings damaged in the 1995 Dinar earthquake are conducted using the proposed three-dimensional nonlinear equivalent frame model. After the displacement demands of the buildings are determined according to the Turkish Earthquake Code 2007, performance-based assessment of the buildings is carried out.
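As a toy illustration of the lumped-plastic-hinge idea behind the equivalent frame approach, the Python sketch below computes the elastic-perfectly-plastic lateral response of a single masonry pier; the stiffness, strength and drift values are made-up placeholders, and the thesis model is considerably richer (3D frames, spandrel and pier hinges, cyclic behavior).

```python
import numpy as np

def pier_pushover(k_el, v_u, d_u, n_steps=50):
    """Elastic-perfectly-plastic lateral response of one URM pier idealized as an
    equivalent-frame element with a lumped hinge (illustrative placeholder values).
    k_el: elastic stiffness [kN/mm], v_u: lateral strength [kN], d_u: drift capacity [mm]."""
    d = np.linspace(0.0, d_u, n_steps)        # imposed lateral displacement
    v = np.minimum(k_el * d, v_u)             # force capped by the hinge strength
    return d, v

# Hypothetical pier: 5.0 kN/mm elastic stiffness, 40 kN strength, 12 mm drift capacity
d, v = pier_pushover(5.0, 40.0, 12.0)
print(f"yield displacement = {40.0 / 5.0:.1f} mm, ultimate force = {v[-1]:.0f} kN")
```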
24 |
Manejo da paisagem em fragmentos de floresta de araucária no sul do Brasil com base no incremento diamétrico / Landscape management in Araucaria forest fragments in southern Brazil, based on diameter increment / Loiola, Táscilla Magalhães, 19 February 2016 (has links)
The objective of this study was to use geostatistics and dendrochronology, together with morphometric and dendrometric variables, to evaluate the growth of Araucaria angustifolia in southern Brazil based on diameter increment, to fit growth models, and to generate maps of the increment distribution for landscape management aimed at sustainable interventions in its ecosystem. Data were collected in native forest fragments in four areas distributed over three municipalities of the Santa Catarina mountain plateau: São Joaquim, Urupema and Painel. A total of 256 trees were sampled, for which the dendrometric and morphometric variables, as well as their position in space, were recorded. The morphometric indices indicated that the species occurs under different degrees of competition. Covariance analysis showed differences in the shape-size relationship among the sampling areas. Crown insertion height was positively correlated, and best fitted, with total height, and negatively correlated with crown proportion. The negative correlation with pc% indicates that a higher crown percentage corresponds to a lower crown insertion height. The relationship between crown proportion and crown length indicates that trees with longer crowns have a larger crown mantle and a higher crown proportion and, consequently, a better growth capacity. Crown diameter was positively correlated with diameter at breast height; that is, crown diameter increases proportionally with tree size. Interdimensional relationships analyzed by covariance showed differences between trees growing inside the forest and open-grown trees: trees growing without competition have a higher crown percentage and crown length than trees growing under competition, as well as a larger crown diameter than trees growing inside the forest. It was possible to determine the potential crown diameter, the growth space, the number of trees per hectare and the basal area per hectare, providing a basis for future forest management interventions. For the analysis of diameter increment and age, partial stem analysis was used. The mean annual diameter increment was 0.45 cm/year in São Joaquim 1, 0.69 cm/year in São Joaquim 2, 0.82 cm/year in Urupema and 0.94 cm/year in Painel. Covariance analysis showed differences in the mean annual increment among the study sites. Fitting the increment as a function of diameter at breast height and age showed a gradual decrease in increment as diameter and age increase. Pearson correlation analysis of the annual periodic basal area increment with the morphometric and dendrometric variables showed that the variables with the highest correlations were crown proportion and diameter, with positive correlations of 0.40 and 0.30. The Gamma generalized linear model with identity link showed the best statistical criteria when fitting the annual periodic basal area increment as a function of diameter, crown percentage and height; crown dimension variables can therefore be included in modeling the annual periodic basal area increment of Araucaria angustifolia. In the geostatistical analysis, the data were first evaluated with classical statistics, the semivariograms were then fitted, and ordinary kriging was used to interpolate the data. The standard deviation values show greater variability in the Painel data. All data show positive skewness, making transformation necessary in some cases. The exponential model gave the best fit for the São Joaquim and Painel areas, while in Urupema the spherical model fit best. The interpolation maps made it possible to visualize the spatial distribution of the mean annual diameter increment at the four sites, identifying the areas with the highest and lowest diameter increment. The results of this study make it possible to understand the structure and growth distribution of Araucaria at each study site, supporting management of the landscape and of the species in southern Brazil. Current legislation restricts the sustainable use of the species, its natural regeneration and its increment rates, so legislative reforms are needed to ensure the perpetuity of the Araucaria Forest type. / O objetivo do presente trabalho foi utilizar a geoestatística e a dendrocronologia em conjunto com as variáveis morfométricas e dendrométricas para, com base no incremento diamétrico, avaliar o crescimento no tempo de Araucaria angustifolia no sul do Brasil, ajustar modelos de crescimento e gerar mapas da distribuição do incremento para manejo da paisagem visando intervenções sustentadas em seu ecossistema. Os dados foram coletados em fragmentos de floresta nativa, em quatro áreas distribuídas em três municípios do planalto serrano de Santa Catarina: São Joaquim, Urupema e Painel. Foram amostradas 256 árvores, das quais coletou-se as variáveis dendrométricas e morfométricas, como também seu posicionamento no espaço. A análise dos índices morfométricos indicou que a espécie encontra-se em diferentes graus de competição. A análise de covariância demonstrou que há diferença na relação forma-dimensão nas áreas de amostragem. A altura de inserção de copa apresentou correlação positiva e melhor ajuste com a altura total, e correlação negativa com a proporção de copa. A correlação negativa com pc% indica que um maior percentual de copa corresponde a menor hic. A relação entre a proporção de copa em função do comprimento de copa indica que árvores com maior comprimento de copa apresentam maior manto e proporção de copa, consequentemente, melhor capacidade de crescimento.
Para o diâmetro de copa em função do diâmetro à altura do peito a correlação foi positiva, ou seja, com o aumento em dimensão aumenta proporcionalmente o diâmetro de copa. As relações interdimensionais, analisadas pela covariância, demonstraram diferenças entre árvores de crescimento na floresta e de crescimento livre. Árvores crescendo sem competição apresentam maior pc% e maior cc do que árvores crescendo em competição, assim como, apresentam maior dc do que sob crescimento no interior da floresta. Foi possível determinar o diâmetro de copa potencial, o espaço de crescimento, o número de árvores por hectare e área basal por hectare, servindo como subsídio para futuras intervenções de manejo florestal. Para a análise do
incremento em diâmetro e da idade, utilizou-se da análise parcial de tronco. Em São Joaquim 1 a média para o incremento médio anual em diâmetro foi de 0,45 cm.ano-1, em São Joaquim 2, foi de 0,69 cm.ano-1, em Urupema, 0,82 cm.ano-1 e em Painel, 0,94 cm.ano-1. A análise de covariância mostrou existir diferenças no incremento médio anual nos sítios de estudo. O ajuste do incremento em função do diâmetro à altura do peito e da idade mostrou que as árvores apresentaram diminuição gradativa do incremento com aumento do diâmetro e avanço da idade. A análise de correlação de Pearson para o incremento periódico anual em área basal com as variáveis morfométricas e dendrométricas, demonstrou que as variáveis com maior correlação foram proporção de copa e diâmetro, com correlação positiva de valor 0,40 e 0,30. O modelo linear generalizado Gamma - identidade apresentou os melhores critérios estatísticos no ajuste do incremento periódico anual em área basal em função do diâmetro, percentual de copa e altura. O uso de variáveis de dimensões da copa, podem ser inseridas na modelagem do incremento periódico anual em área basal de Araucaria angustifolia. Na análise geoestatística inicialmente avaliou-se os dados a partir da estatística clássica, em seguida procedeu-se o ajuste dos semivariogramas. Posteriormente, utilizou-se da Krigagem ordinária para a interpolação dos dados. Os valores do desvio padrão mostram que há maior variabilidade nos dados de Painel. Existe uma assimetria positiva para todos os dados, tornando necessária a transformação dos mesmos em alguns casos. O modelo exponencial demostrou melhor ajuste para as áreas de São Joaquim e Painel, já em Urupema o melhor modelo resultou no modelo esférico. Com os mapas de interpolação dos dados foi possível visualizar a distribuição espacial do incremento médio anual em diâmetro nos quatro sítios abordados, identificando as áreas com maior e menor incremento em diâmetro. Os resultados gerados neste trabalho possibilitam perceber a estrutura e a distribuição de crescimento da araucária em cada sítio de estudo, contribuindo para o manejo da paisagem e da espécie no sul do Brasil. A legislação atual restringe o uso sustentável da espécie, sua regeneração natural e o aumento nas taxas de incremento, assim, são necessárias reformas na legislação vigente para garantir a perpetuidade da tipologia Floresta com Araucária
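As a compact illustration of the geostatistical step described above, the Python sketch below performs ordinary kriging of the mean annual diameter increment with an exponential semivariogram; the semivariogram parameters, tree coordinates and increment values are invented placeholders, not the fitted values of this study.

```python
import numpy as np

def exp_semivariogram(h, nugget, sill, range_m):
    """Exponential semivariogram model gamma(h) with illustrative parameters."""
    return nugget + (sill - nugget) * (1.0 - np.exp(-3.0 * h / range_m))

def ordinary_kriging(coords, values, grid_pts, nugget=0.01, sill=0.06, range_m=800.0):
    """Ordinary kriging of diameter increment over target points (minimal sketch)."""
    n = len(coords)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    K = np.empty((n + 1, n + 1))
    K[:n, :n] = exp_semivariogram(d, nugget, sill, range_m)
    np.fill_diagonal(K[:n, :n], 0.0)                 # exact interpolation at the samples
    K[n, :], K[:, n], K[n, n] = 1.0, 1.0, 0.0        # Lagrange row/column (unbiasedness)
    preds = []
    for p in grid_pts:
        h = np.linalg.norm(coords - p, axis=1)
        k = np.append(exp_semivariogram(h, nugget, sill, range_m), 1.0)
        w = np.linalg.solve(K, k)                    # kriging weights plus multiplier
        preds.append(w[:n] @ values)
    return np.array(preds)

# Hypothetical tree locations (m) and increments (cm/year) for one site
coords = np.array([[0, 0], [300, 50], [150, 400], [500, 250]], dtype=float)
inc = np.array([0.41, 0.52, 0.47, 0.39])
grid = np.array([[250.0, 200.0]])
print("interpolated increment:", ordinary_kriging(coords, inc, grid))
```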
25 |
[en] ANALYSIS AND PROPOSITIONS FOR THE OPERATION MODEL DESIGN OF A TRANSPLANTATION UNIT INSERTED IN THE KIDNEY TRANSPLANT NETWORK OF THE STATE OF RIO DE JANEIRO / [pt] ANÁLISE E PROPOSIÇÕES PARA O PROJETO DO MODELO DE OPERAÇÃO DE UMA UNIDADE TRANSPLANTADORA INSERIDA NA REDE DE TRANSPLANTE RENAL DO ESTADO DO RIO DE JANEIRO / ANA CAROLINA PEREIRA DE V SILVA, 14 June 2018 (has links)
[pt] As doenças do rim e trato urinário contribuem com cerca de 850 mil mortes a cada ano, sendo a décima segunda causa de morte do mundo. No Brasil, a diálise ainda é o procedimento mais utilizado, apesar de o transplante ser a modalidade mais recomendada, por oferecer melhor qualidade de vida ao paciente, uma possível redução do risco de mortalidade e menor custo que a diálise. Uma vez na fila, o paciente ainda se depara com um conjunto de ineficiências do Sistema Nacional de Transplante. A presente pesquisa identifica que uma delas é o desalinhamento entre os atores do transplante (doador, receptor e unidade transplantadora) e que há um gap no que tange às atividades do ator unidades transplantadoras. Dessa forma, o objetivo da presente pesquisa é investigar, à luz da gestão de operações, o modelo de uma unidade transplantadora de referência, inserida na rede de transplante renal do estado do Rio de Janeiro. A partir do levantamento da literatura e de campo, são realizadas modelagens do processo de transplante renal, descrição e diagnóstico do modelo de operação da unidade transplantadora e proposições acerca do projeto de operação da unidade estudada. / [en] Diseases of the kidney and urinary tract contribute to about 850,000 deaths each year, making them the 12th leading cause of death in the world. In Brazil, dialysis is still the most used procedure, although transplantation is the most recommended modality, because it offers a better quality of life for the patient, a possible reduction of mortality risk and a lower cost than dialysis. Once in the queue, the patient still faces a set of inefficiencies of the National Transplant System. The present research identifies that one of them is the misalignment between the actors of the transplant (donor, receiver and transplantation unit) and that there is a gap with respect to the activities of the transplantation unit actors. Thus, the objective of this research is to investigate, in the light of operations management, the model of a reference transplantation unit inserted in the kidney transplant network of the state of Rio de Janeiro. From the literature and field survey, modeling of the renal transplantation process, a description and diagnosis of the operation model of the transplantation unit, and propositions about the operation design of the studied unit are carried out.
26 |
Identification of Availability and Performance Bottlenecks in Cloud Computing Systems: an approach based on hierarchical models and sensitivity analysis / MATOS JÚNIOR, Rubens de Souza, 01 March 2016 (has links)
The cloud computing paradigm can reduce the costs of acquiring and maintaining computer systems and enables balanced management of resources according to demand. Hierarchical and composite analytical models are suitable for describing the performance and dependability of cloud computing systems in a concise manner, dealing with the huge number of components that constitute such systems. That approach uses distinct sub-models for each system level, and the measures obtained in each sub-model are integrated to compute the measures for the whole system. Identification of bottlenecks in hierarchical models can nevertheless be difficult, due to the large number of parameters and their distribution among distinct modeling levels and formalisms. This thesis proposes methods for the evaluation and detection of bottlenecks in cloud computing systems. The methodology is based on hierarchical modeling and parametric sensitivity analysis techniques tailored to such a scenario. This research introduces methods to build unified sensitivity rankings when distinct modeling formalisms are combined. These methods are embedded in the Mercury software tool, providing an automated sensitivity analysis framework to support the process. Distinct case studies helped in testing the methodology, encompassing hardware and software aspects of cloud systems, from the basic infrastructure level to applications hosted in private clouds. The case studies showed that the proposed approach is helpful for guiding cloud system designers and administrators in the decision-making process, especially for tuning and architectural improvements. The methodology can also be employed through an optimization algorithm proposed here, called Sensitive GRASP, which aims at optimizing the performance and dependability of computing systems in scenarios where it is not feasible to explore all architectural and configuration possibilities to find the best quality of service. This is especially useful for cloud-hosted services and their complex underlying infrastructures. / O paradigma de computação em nuvem é capaz de reduzir os custos de aquisição e manutenção de sistemas computacionais e permitir uma gestão equilibrada dos recursos de acordo com a demanda. Modelos analíticos hierárquicos e compostos são adequados para descrever de forma concisa o desempenho e a confiabilidade de sistemas de computação em nuvem, lidando com o grande número de componentes que constituem esse tipo de sistema. Esta abordagem usa sub-modelos distintos para cada nível do sistema e as medidas obtidas em cada sub-modelo são usadas para calcular as métricas desejadas para o sistema como um todo. A identificação de gargalos em modelos hierárquicos pode ser difícil, no entanto, devido ao grande número de parâmetros e sua distribuição entre os distintos formalismos e níveis de modelagem. Esta tese propõe métodos para a avaliação e detecção de gargalos de sistemas de computação em nuvem. A abordagem baseia-se na modelagem hierárquica e técnicas de análise de sensibilidade paramétrica adaptadas para tal cenário. Esta pesquisa apresenta métodos para construir rankings unificados de sensibilidade quando formalismos de modelagem distintos são combinados. Estes métodos são incorporados no software Mercury, fornecendo uma estrutura automatizada de apoio ao processo. Uma metodologia de suporte a essa abordagem foi proposta e testada ao longo de estudos de casos distintos, abrangendo aspectos de hardware e software de sistemas IaaS (Infraestrutura como um serviço), desde o nível de infraestrutura básica até os aplicativos hospedados em nuvens privadas. Os estudos de caso mostraram que a abordagem proposta é útil para orientar os projetistas e administradores de infraestruturas de nuvem no processo de tomada de decisões, especialmente para ajustes eventuais e melhorias arquiteturais. A metodologia também pode ser aplicada por meio de um algoritmo de otimização proposto aqui, chamado Sensitive GRASP. Este algoritmo tem o objetivo de otimizar o desempenho e a confiabilidade de sistemas em cenários onde não é possível explorar todas as possibilidades arquiteturais e de configuração para encontrar a melhor qualidade de serviço. Isto é especialmente útil para os serviços hospedados na nuvem e suas complexas infraestruturas subjacentes.
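As a small illustration of the parametric sensitivity idea, the Python sketch below ranks parameters by the magnitude of their scaled sensitivities, estimated numerically on a toy two-level availability model; the model structure and parameter values are invented stand-ins, not the hierarchical models used in this thesis.

```python
def steady_state_availability(params):
    """Toy two-level model: the cloud service is up when the infrastructure AND the
    hosted application are up; each level is a simple MTTF/(MTTF+MTTR) sub-model."""
    a_infra = params["mttf_infra"] / (params["mttf_infra"] + params["mttr_infra"])
    a_app = params["mttf_app"] / (params["mttf_app"] + params["mttr_app"])
    return a_infra * a_app

def scaled_sensitivity_ranking(model, params, rel_step=1e-4):
    """Rank parameters by |S_theta| = |(dM/dtheta) * theta / M|, using central differences."""
    base = model(params)
    ranking = []
    for name, theta in params.items():
        h = rel_step * theta
        up, down = dict(params), dict(params)
        up[name], down[name] = theta + h, theta - h
        dm = (model(up) - model(down)) / (2.0 * h)      # numerical partial derivative
        ranking.append((name, dm * theta / base))       # scaled (relative) sensitivity
    return sorted(ranking, key=lambda t: abs(t[1]), reverse=True)

# Hypothetical parameter values (hours)
p = {"mttf_infra": 800.0, "mttr_infra": 2.0, "mttf_app": 300.0, "mttr_app": 0.5}
for name, s in scaled_sensitivity_ranking(steady_state_availability, p):
    print(f"{name:12s} scaled sensitivity {s:+.5f}")
```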
27 |
Bayesian Nonparametric Modeling of Temporal Coherence for Entity-Driven Video Analytics / Mitra, Adway, January 2015 (has links) (PDF)
In recent times there has been an explosion of online user-generated video content, which has generated significant research interest in video analytics. Human users understand videos in terms of high-level semantic concepts. However, most current research in video analytics is driven by low-level features and descriptors, which often lack semantic interpretation. Existing attempts at semantic video analytics are specialized and require additional resources like movie scripts, which are not available for most user-generated videos. There are no general-purpose approaches to understanding videos through semantic concepts.
In this thesis we attempt to bridge this gap. We view videos as collections of entities, which are semantic visual concepts such as the persons in a movie or the cars in an F1 race video. We focus on two fundamental tasks in video understanding, namely summarization and scene discovery. Entity-driven video summarization and entity-driven scene discovery are important open problems. They are challenging due to the spatio-temporal nature of videos, and also due to the lack of a priori information about entities. We use Bayesian nonparametric methods to solve these problems. In the absence of external resources like scripts, we utilize fundamental structural properties like temporal coherence in videos, which means that adjacent frames should contain the same set of entities and have similar visual features. There have been no focussed attempts to model this important property. This thesis makes several contributions in computer vision and Bayesian nonparametrics by addressing entity-driven video understanding through temporal coherence modeling.
Temporal coherence (TC) in videos is observed across frames at the level of features/descriptors, as well as at the semantic level. We start with an attempt to model TC at the level of features/descriptors. A tracklet is a spatio-temporal fragment of a video: a set of spatial regions in a short sequence (5-20) of consecutive frames, each of which encloses a particular entity. We attempt to find a representation of tracklets to aid tracking of entities. We explore region descriptors like covariance matrices of spatial features in individual frames. Due to temporal coherence, such matrices from corresponding spatial regions in successive frames have nearly identical eigenvectors. We utilize this property to model a tracklet using a covariance matrix, and use it for region-based entity tracking. We propose a new method to estimate such a matrix. Our method is found to be much more efficient and effective than alternative covariance-based methods for entity tracking.
Next, we move to modeling temporal coherence at a semantic level, with special emphasis on videos of movies and TV-series episodes. Each tracklet is associated with an entity (say, a particular person). Spatio-temporally close but non-overlapping tracklets are likely to belong to the same entity, while tracklets that overlap in time can never belong to the same entity. Our aim is to cluster the tracklets based on the entities associated with them, with the goal of discovering the entities in a video along with all their occurrences. We argue that Bayesian nonparametrics is the most convenient way to do this. We propose a temporally coherent version of the Chinese Restaurant Process (TC-CRP) that can encode such constraints easily, results in the discovery of pure clusters of tracklets, and also filters out tracklets resulting from false detections. TC-CRP shows excellent performance on person discovery from TV-series videos. We also discuss semantic video summarization based on entity discovery.
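A minimal single-pass sketch of the temporal-coherence constraint is given below in Python: a tracklet may never join a cluster containing a tracklet it overlaps with in time, while non-overlapping, visually similar clusters attract it with a CRP-style rich-get-richer weight. The similarity function, the greedy (MAP-style) assignment and the data layout are simplifying assumptions; the thesis develops a full Bayesian nonparametric treatment with proper posterior inference.

```python
import numpy as np

def tc_crp_assign(tracklets, alpha=1.0,
                  sim=lambda a, b: np.exp(-np.linalg.norm(a - b))):
    """Sequentially assign tracklets to entity clusters with a CRP-style prior that
    respects temporal coherence. Each tracklet is (start_frame, end_frame, feature)."""
    clusters = []                       # each cluster: list of tracklet indices
    labels = []
    for i, (s, e, f) in enumerate(tracklets):
        weights = []
        for members in clusters:
            # hard constraint: tracklets that co-occur in time are different entities
            overlap = any(not (e < tracklets[j][0] or s > tracklets[j][1]) for j in members)
            if overlap:
                weights.append(0.0)
            else:
                mean_feat = np.mean([tracklets[j][2] for j in members], axis=0)
                weights.append(len(members) * sim(f, mean_feat))   # rich-get-richer term
        weights.append(alpha)           # open a new cluster (a new entity)
        k = int(np.argmax(weights))     # greedy choice; sampling would give a Gibbs sweep
        if k == len(clusters):
            clusters.append([i])
        else:
            clusters[k].append(i)
        labels.append(k)
    return labels
```

Replacing the greedy argmax with sampling proportional to the weights, and iterating over the tracklets repeatedly, turns this pass into the kind of Gibbs-style inference usually used with CRP models.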
Next, we consider entity-driven temporal segmentation of a video into scenes, where each scene is characterized by the entities present in it. This is a novel application, as existing work on temporal segmentation has focussed on low-level features of frames rather than entities. We propose EntScene, a generative model for videos based on entities and scenes, and propose an inference algorithm based on blocked Gibbs sampling for simultaneous entity discovery and scene discovery. We compare it to alternative inference algorithms and show significant improvements in terms of segmentation and scene discovery.
Video representation by low-rank matrix has gained popularity recently, and has been used for various tasks in Computer Vision. In such a representation, each column corresponds to a frame or a single detection. Such matrices are likely to have contiguous sets of identical columns due to temporal coherence, and hence they should be low-rank. However, we discover that none of the existing low-rank matrix recovery algorithms are able to preserve such structures. We study regularizers to encourage these structures for low-rank matrix recovery through convex optimization, but note that TC-CRP-like Bayesian modeling is better for enforcing them.
We then focus our attention on modeling temporal coherence in hierarchically grouped sequential data, such as word tokens grouped into sentences, paragraphs, documents etc. in a text corpus. We attempt Bayesian modeling for such data, with application to multi-layer segmentation. We first make a detailed study of existing models for such data and present a taxonomy for them, called Degree-of-Sharing (DoS), based on how the various mixture components are shared by the groups of data in these models. We come up with the Layered Dirichlet Process, which generalizes the Hierarchical Dirichlet Process to multiple layers and can also handle sequential information easily through a Markovian approach. This is applied to hierarchical co-segmentation of a set of news transcripts into broad categories (like politics, sports etc.) and individual stories. We also propose an explicit-duration (semi-Markov) approach for this purpose, and provide an efficient inference algorithm for it. We also discuss generative processes for distribution matrices, where each column is a probability distribution. For this we discuss an application: inferring the correct answers to questions on online answering forums from opinions provided by different users.
28 |
Optimalizace vodovodní sítě města Počátky / Optimization of the Water Supply Network of Počátky Town / Pavelka, David, January 2020 (has links)
This diploma thesis focuses on the optimization of the water supply system in the town of Počátky. It describes the process of creating the mathematical model needed for the hydraulic analysis that was used to assess the Počátky water supply system. The thesis introduces the reader to the basic layout of drinking water distribution, to the procedure for collecting the data required for hydraulic analysis, and to the tools and means used in hydraulic analysis. The conclusions present variants for a possible optimization of the Počátky water supply system.
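To illustrate what a hydraulic analysis does at its core, the Python sketch below balances flows in a single pipe loop with the classical Hardy Cross method; the resistance coefficients and assumed flows are invented, and the actual assessment of the Počátky network relies on a full network model rather than this hand-sized example.

```python
def hardy_cross_single_loop(r, q0, n=2.0, tol=1e-6, max_iter=100):
    """Hardy Cross balancing of one closed loop: r are pipe resistance coefficients,
    q0 the assumed (signed) flows around the loop, head loss h = r * Q * |Q|^(n-1).
    Illustrative only; real networks have many loops and node demands."""
    q = list(q0)
    for _ in range(max_iter):
        num = sum(ri * qi * abs(qi) ** (n - 1.0) for ri, qi in zip(r, q))
        den = sum(n * ri * abs(qi) ** (n - 1.0) for ri, qi in zip(r, q))
        dq = -num / den                      # loop flow correction
        q = [qi + dq for qi in q]            # apply the same correction to every pipe
        if abs(dq) < tol:
            break
    return q

# Hypothetical 3-pipe loop with assumed initial flows (L/s), positive = clockwise
print(hardy_cross_single_loop(r=[4.0, 2.0, 5.0], q0=[10.0, -2.0, -6.0]))
```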
29 |
Development of Regional Optimization and Market Penetration Models For Electric Vehicles in the United States / Noori, Mehdi, 01 January 2015 (has links)
Since the transportation sector still relies mostly on fossil fuels, the emissions and overall environmental impacts of the transportation sector are particularly relevant to the mitigation of the adverse effects of climate change. Sustainable transportation therefore plays a vital role in the ongoing discussion on how to reduce energy insecurity and address future energy requirements. One of the most promising ways to increase energy security and reduce emissions from the transportation sector is to support alternative fuel technologies, including electric vehicles (EVs). As vehicles become electrified, the transportation fleet will rely on the electric grid as well as traditional transportation fuels for energy. The life cycle cost and environmental impacts of EVs are still very uncertain, but are nonetheless extremely important for making policy decisions. Moreover, the use of EVs will help to diversify the fuel mix and thereby reduce dependence on petroleum. In this respect, the United States has set a goal of a 20% share of EVs on U.S. roadways by 2030. However, there is also a considerable amount of uncertainty in the market share of EVs that must be taken into account. This dissertation aims to address these inherent uncertainties by presenting two new models: the Electric Vehicles Regional Optimizer (EVRO) and the Electric Vehicle Regional Market Penetration (EVReMP) model. Using these two models, decision makers can predict the optimal combination of drivetrains and the market penetration of EVs in different regions of the United States for the year 2030. First, the life cycle cost and life cycle environmental emissions of internal combustion engine vehicles, gasoline hybrid electric vehicles, and three different EV types (gasoline plug-in hybrid EVs, gasoline extended-range EVs, and all-electric EVs) are evaluated, with their inherent uncertainties duly considered. Then, the environmental damage costs and water footprints of the studied drivetrains are estimated. Additionally, using an Exploratory Modeling and Analysis method, the uncertainties related to the life cycle costs, environmental damage costs, and water footprints of the studied vehicle types are modeled for different U.S. electricity grid regions. Next, an optimization model is used in conjunction with this Exploratory Modeling and Analysis method to find the ideal combination of different vehicle types in each U.S. region for the year 2030. Finally, an agent-based model is developed to identify the optimal market shares of the studied vehicles in each of 22 electric regions in the United States. The findings of this research will help policy makers and transportation planners to prepare the nation's transportation system for the future influx of EVs. The findings indicate that the decision maker's point of view plays a vital role in selecting the optimal fleet array. While internal combustion engine vehicles have the lowest life cycle cost, they also have the highest environmental damage cost and a relatively low water footprint, and they will not be a good choice in the future. On the other hand, although all-electric vehicles have a relatively low life cycle cost and the lowest environmental damage cost of the evaluated vehicle options, they also have the highest water footprint, so relying solely on all-electric vehicles is not an ideal choice either. Rather, the best fleet mix in 2030 will be an electrified fleet that relies on both electricity and gasoline.
From the agent-based model results, a deviation is evident between the ideal fleet mix and the mix resulting from consumer behavior, in which EV shares increase dramatically by the year 2030 but still capture only about 30 percent of the market. Therefore, government subsidies and the word-of-mouth effect will play a vital role in the future adoption of EVs.
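The flavour of the agent-based market-penetration idea can be conveyed with the toy Python sketch below, in which each non-adopter's yearly adoption probability combines a baseline utility, a subsidy term and a word-of-mouth term that grows with the current adopter share. All coefficients are illustrative assumptions and are not calibrated to the dissertation's data or regions.

```python
import numpy as np

def simulate_ev_adoption(n_agents=10000, years=15, base_utility=-3.0,
                         subsidy_effect=0.5, wom_strength=4.0, seed=0):
    """Toy diffusion model: logistic adoption probability driven by subsidies and
    word of mouth (share of existing adopters). Returns the yearly EV share."""
    rng = np.random.default_rng(seed)
    adopted = np.zeros(n_agents, dtype=bool)
    shares = []
    for _ in range(years):
        share = adopted.mean()
        utility = base_utility + subsidy_effect + wom_strength * share
        p_adopt = 1.0 / (1.0 + np.exp(-utility))          # logistic choice probability
        new_adopters = (~adopted) & (rng.random(n_agents) < p_adopt)
        adopted |= new_adopters
        shares.append(adopted.mean())
    return shares

shares = simulate_ev_adoption()
print("yearly EV shares:", [round(s, 2) for s in shares])
```

Varying subsidy_effect and wom_strength in such a model shows how policy support and peer influence shift the adoption trajectory, which is the qualitative point made above.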
30 |
Automatic Generation Of Supply Chain Simulation Models From SCOR-Based Ontologies / Cope, Dayana, 01 January 2008 (has links)
In today's economy of global markets, supply chain networks, supplier/customer relationship management and intense competition, decision makers are faced with the need to make decisions using tools that do not accommodate the nature of the changing market. This research focuses on developing a methodology that addresses this need. The developed methodology provides supply chain decision makers with a tool to perform efficient decision making in stochastic, dynamic and distributed supply chain environments. The integrated methodology allows for informed decision making in a fast, sharable and easy-to-use format. The methodology was implemented by developing a stand-alone tool that allows users to define a supply chain simulation model using SCOR-based ontologies. The ontology includes the supply chain knowledge and the knowledge required to build a simulation model of the supply chain system. A simulation model is generated automatically from the ontology, providing the flexibility to model at various levels of detail and to change the model structure on the fly. The methodology implementation is demonstrated and evaluated through a retail-oriented case study. When comparing the implementation using the developed methodology with a "traditional" simulation methodology, a significant reduction in definition and execution time was observed.
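The core idea of generating a simulation model from an ontology can be sketched as follows, assuming the SimPy library is available: a heavily simplified, hypothetical dictionary stands in for the SCOR-based ontology, and the simulation process chain is built from it at run time rather than hand-coded. The actual tool works from much richer SCOR ontologies and a full discrete-event model.

```python
import simpy

# Hypothetical, simplified "ontology": each SCOR process element has a mean
# duration (days) and a pointer to its downstream element.
SCOR_ONTOLOGY = {
    "source":  {"duration": 2.0, "next": "make"},
    "make":    {"duration": 3.0, "next": "deliver"},
    "deliver": {"duration": 1.5, "next": None},
}

def build_process(env, ontology, start="source", log=None):
    """Generate a SimPy process chain directly from the ontology description."""
    def order_flow(order_id):
        node = start
        while node is not None:
            spec = ontology[node]
            yield env.timeout(spec["duration"])      # the process element takes its duration
            if log is not None:
                log.append((order_id, node, env.now))
            node = spec["next"]
    return order_flow

log = []
env = simpy.Environment()
flow = build_process(env, SCOR_ONTOLOGY, log=log)
for i in range(3):                                   # three customer orders
    env.process(flow(i))
env.run()
print(log)
```

Because the model structure comes from the ontology rather than from code, changing the level of detail (or the process chain itself) only requires editing the ontology, which mirrors the "on the fly" flexibility described above.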