1 |
Driving Cycle Generation Using Statistical Analysis and Markov Chains. Torp, Emil; Önnegren, Patrik. January 2013
A driving cycle is a velocity profile over time. Driving cycles can be used for environmental classification of cars and to evaluate vehicle performance. One benefit of using stochastic driving cycles instead of predefined driving cycles, such as the New European Driving Cycle, is that the risk of cycle beating is reduced. Different methods to generate stochastic driving cycles based on real-world data have been used around the world, but the representativeness of the generated driving cycles has been difficult to ensure. This thesis studies the possibility of generating stochastic driving cycles that capture specific features from a set of real-world driving cycles. Data from more than 500 real-world trips have been processed and categorized. The driving cycles are merged into several transition probability matrices (TPMs), where each element corresponds to a specific state defined by its velocity and acceleration. The TPMs are used with Markov chain theory to generate stochastic driving cycles. The driving cycles are validated using percentile limits on a set of characteristic variables obtained from statistical analysis of the real-world driving cycles. The distribution of the generated driving cycles is investigated and compared with the distribution of the real-world driving cycles. The generated driving cycles prove to represent the original set of real-world driving cycles in terms of key variables determined through statistical analysis. Four different methods are used to determine which statistical variables describe the features of the provided driving cycles. Two of the methods use regression analysis. Hierarchical clustering of statistical variables is proposed as a third alternative, and the last method combines the cluster analysis with the regression analysis. The entire process is automated, and a graphical user interface is developed in Matlab to facilitate the use of the software.
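The generation step described above, sampling a velocity profile from a transition probability matrix with Markov chain theory, can be sketched as follows. This is a minimal illustration, not the thesis's Matlab implementation: the three states, the toy TPM, and the representative velocities are all invented, and a real model would use joint velocity-acceleration states whose transition counts are estimated from the recorded trips.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy TPM over 3 states (rows sum to 1); a real TPM would be built by counting
# observed (velocity, acceleration) -> (velocity, acceleration) transitions.
tpm = np.array([
    [0.8, 0.2, 0.0],
    [0.1, 0.7, 0.2],
    [0.0, 0.3, 0.7],
])
state_velocity = np.array([0.0, 10.0, 20.0])  # representative velocity per state (m/s)

def generate_cycle(tpm, start_state, n_steps, rng):
    """Sample a state sequence from the Markov chain defined by `tpm`."""
    states = [start_state]
    for _ in range(n_steps - 1):
        states.append(rng.choice(len(tpm), p=tpm[states[-1]]))
    return np.array(states)

states = generate_cycle(tpm, start_state=0, n_steps=200, rng=rng)
velocity_profile = state_velocity[states]  # the generated driving cycle
```

A generated cycle would then be accepted or rejected against the percentile limits on the characteristic variables.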
/ A driving cycle describes how the velocity of a vehicle changes during a drive. Driving cycles are used, among other things, for environmental classification of cars and to evaluate vehicle performance. Different methods to generate stochastic driving cycles based on real-world data have been used around the world, but it has been difficult to reproduce natural driving cycles. The possibility of generating stochastic driving cycles that represent a set of natural driving cycles is studied. Data from over 500 driving cycles are processed and categorized. These are used to create transition probability matrices where each element corresponds to a particular state, with velocity and acceleration as state variables. The matrices, together with Markov chain theory, are used to generate stochastic driving cycles. The generated driving cycles are validated using percentile limits for a number of characteristic variables computed from the natural driving cycles. The velocity and acceleration distributions of the generated driving cycles are studied and compared with the natural driving cycles to ensure that they are representative. Statistical properties were compared, and the generated driving cycles proved to resemble the original set of driving cycles. Four different methods are used to determine which statistical variables describe the natural driving cycles. Two of the methods use regression analysis. Hierarchical clustering of statistical variables is proposed as a third alternative. The last method combines the cluster analysis with the regression analysis. The entire process is automated, and a graphical user interface has been developed in Matlab to facilitate the use of the program.
|
2 |
A Methodological Framework for Modeling Pavement Maintenance Costs for Projects with Performance-based Contracts. Panthi, Kamalesh. 12 November 2009
Performance-based maintenance contracts differ significantly from the material- and method-based contracts that have traditionally been used to maintain roads. Road agencies around the world have moved towards a performance-based contract approach because it offers several advantages, such as cost savings, greater budgeting certainty, and better customer satisfaction through improved road services and conditions. In these contracts, payments for road maintenance are explicitly linked to the contractor successfully meeting certain clearly defined minimum performance indicators. Quantitative evaluation of the cost of performance-based contracts is difficult because of the complexity of the pavement deterioration process. Based on a probabilistic analysis of failures to achieve multiple performance criteria over the length of the contract period, an effort has been made to develop a model that is capable of estimating the cost of these performance-based contracts. One of the essential functions of such a model is to predict the performance of the pavement as accurately as possible. Prediction of future pavement degradation is done using a Markov chain process, which requires estimating transition probabilities from previous deterioration rates for similar pavements. Transition probabilities were derived using historical pavement condition rating data, both for predicting pavement deterioration when there is no maintenance and for predicting pavement improvement when maintenance activities are performed. A methodological framework has been developed to estimate the cost of maintaining roads based on multiple performance criteria such as cracking, rutting, and roughness. The application of the developed model has been demonstrated via a real case study of Miami Dade Expressways (MDX), using pavement condition rating data from the Florida Department of Transportation (FDOT) for a typical performance-based asphalt pavement maintenance contract.
Results indicated that the pavement performance model developed could predict pavement deterioration quite accurately. The sensitivity analysis performed shows that the model is very responsive to even slight changes in the pavement deterioration rate and the performance constraints. It is expected that the use of this model will assist highway agencies and contractors in arriving at a fair contract value for executing long-term performance-based pavement maintenance works.
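The two building blocks described in this abstract, deriving transition probabilities from historical condition ratings and using them to predict deterioration, can be sketched as follows. The condition histories, the three condition states, and all counts are invented for illustration; they are not the FDOT data used in the thesis.

```python
import numpy as np

# Observed condition ratings for several pavement sections over consecutive
# years (state 0 = best condition, 2 = worst), without maintenance.
histories = [
    [0, 0, 1, 1, 2],
    [0, 1, 1, 2, 2],
    [0, 0, 0, 1, 1],
]

n_states = 3
counts = np.zeros((n_states, n_states))
for h in histories:
    for a, b in zip(h, h[1:]):   # count year-to-year transitions
        counts[a, b] += 1

# Row-normalise counts into transition probabilities (every state is observed
# at least once here; empty rows would need special handling).
tpm = counts / counts.sum(axis=1, keepdims=True)

def predict(dist, tpm, years):
    """Propagate a condition-state distribution `years` steps forward."""
    for _ in range(years):
        dist = dist @ tpm
    return dist

dist0 = np.array([1.0, 0.0, 0.0])      # new pavement: all mass in best state
dist5 = predict(dist0, tpm, years=5)   # predicted distribution after 5 years
```

A second matrix of the same form, estimated from post-maintenance ratings, would model the improvement due to maintenance actions.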
|
3 |
Damping Factor Analysis for PageRank. Scheie, Fredrik. January 2022
The purpose of this thesis is to present research on the damping factor of the PageRank algorithm. Symbolic calculations are used to compute the eigenvalues and eigenvectors of the Google matrix for both directed and undirected graphs. The graphs considered comprise all directed graphs on up to four vertices; in addition, the undirected graphs on five vertices are given in this thesis. A central research question has been to determine how the damping factor d affects the dominant eigenvector of the corresponding graphs, and thereby how the PageRank is directly influenced. A few selected graphs, along with their calculations, were extracted and analyzed in terms of the parameter d. For the calculations in this thesis, probability matrices were constructed for all graphs, and the calculations were made in Matlab, which returned the eigenvalues and eigenvectors of the Google matrix along with the input probability matrix and the Google matrix itself. In addition, the thesis contains a theoretical portion on the theory behind PageRank, with the relevant proofs, theorems, and definitions used throughout the thesis. A brief account of the historical background and applications of PageRank is also given. A discussion of the results is provided, covering the interaction of the damping factor with the dominant PageRank eigenvector. Lastly, a conclusion is given and future prospects relating to the topic of research are discussed. The work in this thesis is inspired by previous work by Silvestrov et al. from 2008, with further emphasis placed here on the damping factor.
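The Google-matrix construction and the role of the damping factor can be sketched numerically as follows. The 4-vertex directed graph is invented for illustration, and power iteration stands in for the symbolic and Matlab eigenvector computations used in the thesis.

```python
import numpy as np

def google_matrix(P, d):
    """G = d*P + (1-d)/n * J, with P row-stochastic and J the all-ones matrix."""
    n = len(P)
    return d * P + (1.0 - d) / n * np.ones((n, n))

def pagerank(P, d, tol=1e-12):
    """Dominant left eigenvector of the Google matrix via power iteration."""
    n = len(P)
    G = google_matrix(P, d)
    r = np.full(n, 1.0 / n)
    while True:
        r_next = r @ G
        if np.abs(r_next - r).sum() < tol:
            return r_next
        r = r_next

# A small directed graph on 4 vertices: row i spreads node i's weight over its
# out-links. Node 2 receives links from nodes 0, 1, and 3.
P = np.array([
    [0.0, 0.5, 0.5, 0.0],
    [0.0, 0.0, 1.0, 0.0],
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
])

r_low = pagerank(P, d=0.15)   # near-uniform ranking for small d
r_high = pagerank(P, d=0.85)  # link structure dominates for large d
```

Comparing `r_low` and `r_high` shows the effect studied in the thesis: as d grows, the dominant eigenvector moves away from the uniform vector and toward the ranking implied by the link structure alone.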
|
4 |
Hardware Implementation of the Expectation Maximization Algorithm for Motif Finding. Koneru, Sushma. January 2009
No description available.
|
5 |
Integrating Data from Multiple Sources to Estimate Transit-Land Use Interactions and Time-Varying Transit Origin-Destination Demand. Lee, Sang Gu. January 2012
This research contributes to a very active body of literature on the application of Automated Data Collection Systems (ADCS) and openly shared data to public transportation planning. It also addresses the interaction between transit demand and land use patterns, a key component of generating time-varying origin-destination (O-D) matrices at a route level. An O-D matrix describes the travel demand between two different locations and is indispensable information for most transportation applications, from strategic planning to traffic control and management. A transit passenger's O-D pair at the route level simply indicates the origin and destination stop along the considered route. Observing the land use types (e.g., residential, commercial, institutional) within the catchment area of each stop can help in identifying transit demand at any given time or over time. The proposed research addresses the incorporation of an alighting probability matrix (APM) - tabulating the probabilities that a passenger alights at stops downstream of a boarding at a specified stop - into a time-varying O-D estimation process, based on the passenger's trip purpose or activity locations as represented by the interactions between transit demand and land use patterns. In order to examine these interactions, this research also uses a much larger dataset that has been automatically collected from various electronic technologies: Automated Fare Collection (AFC) systems and Automated Passenger Counter (APC) systems, in conjunction with other readily available data such as Google's General Transit Feed Specification (GTFS) and parcel-level land use data. These large and highly detailed datasets can rectify the limitations of manual data collection (e.g., on-board surveys) as well as enhance existing decision-making tools.
This research proposes the use of Google's GTFS for a bus stop aggregation model (SAM) based on the distance between individual stops, textual similarity, and common service areas. By measuring land use types within a specified service area based on SAM, this research helps advance our understanding of transit demand in the vicinity of bus stops. In addition, the systematic stop-matching technique in SAM allows us to analyze the symmetry of boardings and alightings, revealing considerable passenger flows between specific time periods and symmetry between time-period pairs (e.g., between the AM and PM peaks) on an individual day. This research explores the potential generation of a time-varying O-D matrix from APC data, in conjunction with integrated land use and transportation models. It aims to incorporate all of this valuable information - the time-varying alighting probability matrix (TAPM) that represents on-board passengers' trip purposes - into the O-D estimation process. A practical application is based on APC data for a specific transit route in the Minneapolis - St. Paul metropolitan area. This research also has other practical implications. It can help transit agencies and policy makers develop decision-making tools to support transit planning, using improved databases with transit-related ADCS and parcel-level land use data. As a result, this work not only has direct implications for the design and operation of future urban public transport systems (e.g., more precise bus scheduling and improved service to public transport users), but also for urban planning (e.g., transit-oriented urban development) and travel forecasting.
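The core role of an alighting probability matrix in route-level O-D estimation can be sketched as follows. This is a hypothetical illustration, not the thesis's TAPM procedure: apm[i, j] is the probability that a passenger boarding at stop i alights at downstream stop j, so multiplying boardings into it distributes each stop's boardings over the candidate alighting stops. The 4-stop route and all numbers are invented.

```python
import numpy as np

boardings = np.array([30.0, 20.0, 10.0, 0.0])  # APC boardings per stop

# Strictly upper-triangular APM: passengers only alight downstream of where
# they board, and each row of alighting probabilities sums to 1 (except the
# terminal stop, where no one boards).
apm = np.array([
    [0.0, 0.3, 0.3, 0.4],
    [0.0, 0.0, 0.5, 0.5],
    [0.0, 0.0, 0.0, 1.0],
    [0.0, 0.0, 0.0, 0.0],
])

# O-D matrix: od[i, j] = expected passengers travelling from stop i to stop j.
od = boardings[:, None] * apm
alightings = od.sum(axis=0)   # implied alighting counts per stop
```

In the thesis's time-varying setting, a separate APM per time period (informed by the land use mix around each stop) would replace this single static matrix.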
|
6 |
Lifetime Condition Prediction For Bridges. Bayrak, Hakan. 01 October 2011
Infrastructure systems are crucial facilities. They supply the necessary transportation, water, and energy utilities for the public. However, these systems gradually deteriorate as they age and approach the end of their lifespans. As a result, they require periodic maintenance and repair in order to function and be reliable throughout their lifetimes. Bridge infrastructure is an essential part of the transportation infrastructure. Bridge management systems (BMSs), used to monitor the condition and safety of the bridges in a bridge infrastructure, have evolved considerably in the past decades. The aim of BMSs is to use resources in an optimal manner while keeping the bridges out of risk of failure. BMSs use lifetime performance curves to predict the future condition of bridge elements or bridges. The most widely implemented condition-based performance prediction and maintenance optimization models are Markov Decision Process (MDP)-based models. The importance of an MDP-based model is that it defines the time-variant deterioration using the Markov transition probability matrix and performs the lifetime cost optimization by finding the optimum maintenance policy. In this study, an MDP-based model is examined and a computer program is developed to find the optimal policy with discounted life-cycle cost. The other performance prediction model investigated in this study is a probabilistic bi-linear model, which accounts for the uncertainties in the deterioration process and in the application of maintenance actions through the use of random variables. As part of the study, in order to further analyze and develop the bi-linear model, a Latin Hypercube Sampling (LHS)-based simulation program is also developed and integrated into the main computational algorithm, which can produce condition, safety, and life-cycle cost profiles for bridge members with and without maintenance actions.
Furthermore, a polynomial-based condition prediction is also examined as an alternative performance prediction model. This model is obtained from condition rating data by applying regression analysis. Regression-based performance curves are regenerated using the Latin Hypercube sampling method. Finally, the results from the Markov chain-based performance prediction are compared with the simulation-based bi-linear prediction, and the derivation of the transition probability matrix from a simulated regression-based condition profile is introduced as a newly developed approach. It has been observed that the results obtained from the Markov chain-based average condition rating profiles match well with those obtained from the simulation-based mean condition rating profiles. The result suggests that the simulation-based condition prediction model may be considered a potential model for future BMSs.
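The MDP optimisation described in this abstract, finding the maintenance policy that minimises discounted life-cycle cost, can be sketched with value iteration. The three condition states, the "do nothing" and "repair" transition matrices, the costs, and the discount factor are all invented for illustration; they are not the thesis's bridge data.

```python
import numpy as np

# Transition matrices per action over states {0: good, 1: fair, 2: poor}.
P = {
    "do_nothing": np.array([[0.8, 0.2, 0.0],
                            [0.0, 0.7, 0.3],
                            [0.0, 0.0, 1.0]]),   # poor is absorbing if ignored
    "repair":     np.array([[1.0, 0.0, 0.0],
                            [0.9, 0.1, 0.0],
                            [0.7, 0.3, 0.0]]),
}
# Immediate cost per state for each action (neglect is cheap until it isn't).
cost = {
    "do_nothing": np.array([0.0, 2.0, 20.0]),
    "repair":     np.array([5.0, 6.0, 12.0]),
}
gamma = 0.95  # discount factor

def value_iteration(P, cost, gamma, tol=1e-9):
    """Minimum expected discounted life-cycle cost and the optimal policy."""
    V = np.zeros(3)
    while True:
        Q = {a: cost[a] + gamma * P[a] @ V for a in P}
        V_next = np.minimum(Q["do_nothing"], Q["repair"])
        if np.max(np.abs(V_next - V)) < tol:
            policy = ["do_nothing" if Q["do_nothing"][s] <= Q["repair"][s]
                      else "repair" for s in range(3)]
            return V_next, policy
        V = V_next

V, policy = value_iteration(P, cost, gamma)
```

With these toy numbers the optimal policy leaves a good bridge alone and repairs once it has deteriorated, the qualitative behaviour such models are built to capture.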
|
7 |
State simulation of multi-stage processes. Rimkevičiūtė, Inga. 14 June 2010
The main purpose of this work is to develop a model of the states of multi-stage processes that can be used to simulate scenarios of the various possible disturbances of any system and to perform demonstration calculations. When a disturbance or damage occurs, the operation of the other stages participating in the system is disrupted, with consequences that cause problems for the surrounding, functioning sectors. It is therefore very important to establish the possible state scenarios of the multi-stage process model, to analyze their likelihood and frequency, and to assess them. The main focus is on modelling the matrices of transition probabilities from the states of one stage to the states of the next, and on developing the computational algorithm. The behaviour of the disturbance-occurrence probabilities is then observed over 100 transitions. Markov chains and processes, as well as probability distributions, are used for this purpose.
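The 100-transition observation described above can be sketched by propagating a state distribution through a transition probability matrix. The three states ("ok", "disturbed", "failed") and the matrix entries are invented for illustration, not taken from the thesis.

```python
import numpy as np

tpm = np.array([
    [0.95, 0.04, 0.01],   # ok -> ok / disturbed / failed
    [0.30, 0.60, 0.10],   # a disturbed stage can recover or fail
    [0.00, 0.00, 1.00],   # failure is absorbing
])

dist = np.array([1.0, 0.0, 0.0])  # start in the "ok" state
failure_prob = []
for _ in range(100):
    dist = dist @ tpm             # one Markov transition
    failure_prob.append(dist[2])  # track how failure probability evolves
```

Because the failure state is absorbing, the tracked probability is nondecreasing over the 100 transitions.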
|
8 |
Improved facies modelling with multivariate spatial statistics. Li, Yupeng. Unknown Date
No description available.
|
9 |
Relationships Between Felt Intensity And Recorded Ground Motion Parameters For Turkey. Bilal, Mustafa. 01 January 2013
Earthquakes are among the natural disasters with significant damage potential; however, it is possible to reduce the losses by taking several remedial measures. Reduction of seismic losses starts with identifying and estimating the expected damage to some accuracy. Since both design styles and construction defects exhibit mostly local properties all over the world, damage estimations should be performed at regional levels.
Another important issue in disaster mitigation is to determine a robust measure of ground motion intensity. As of now, well-established correlations between shaking intensity and instrumental ground motion parameters have not yet been studied in detail for Turkish data.
In the first part of this thesis, regional empirical Damage Probability Matrices (DPMs) are formed for Turkey. As the input data, the detailed damage database of the 17 August 1999 Kocaeli earthquake (Mw=7.4) is used. The damage probability matrices are derived for Sakarya, Bolu and Kocaeli, for both reinforced concrete and masonry buildings. Results are compared with previous similar studies and the differences are discussed. After validation with future data, these DPMs can be used in the calculation of earthquake insurance premiums.
In the second part of this thesis, two relationships between felt intensity and peak ground motion parameters are generated using the linear least-squares regression technique. The first one correlates Modified Mercalli Intensity (MMI) with Peak Ground Acceleration (PGA), whereas the second does the same for Peak Ground Velocity (PGV). Old damage reports and isoseismal maps are employed to derive the 92 data pairs of MMI, PGA, and PGV used in the regression analyses. These local relationships can be used in the future for ShakeMap applications in rapid response and disaster management activities.
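The regression step described above can be sketched as fitting MMI = a + b*log10(PGA) by linear least squares, a common functional form for such intensity relationships. The data pairs below are synthetic placeholders, not the 92 pairs the thesis derives from damage reports and isoseismal maps, and the fitted coefficients have no seismological meaning.

```python
import numpy as np

pga = np.array([10.0, 30.0, 60.0, 120.0, 250.0, 500.0])  # cm/s^2, synthetic
mmi = np.array([3.1, 4.4, 5.2, 6.0, 6.9, 7.8])           # synthetic intensities

# Least-squares fit of MMI against log10(PGA): solve A @ [a, b] ~= mmi, where
# the design matrix A has a constant column and a log10(PGA) column.
A = np.column_stack([np.ones_like(pga), np.log10(pga)])
(a, b), *_ = np.linalg.lstsq(A, mmi, rcond=None)

mmi_pred = a + b * np.log10(pga)   # fitted MMI values
```

An analogous fit against log10(PGV) would give the second relationship.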
|
10 |
The best reference point method for the modification of the conditional distribution odds ratio matrices. 郭俊佑. Unknown Date
Chen (2010) provides representations of the odds ratio function to examine the compatibility of conditional probability density functions, and gives the corresponding joint probability density functions when they are compatible. In this research, we provide representations of the odds ratio matrix to examine the compatibility of two discrete conditional probability matrices in the finite bivariate case, and give the corresponding joint probability matrix when they are compatible. For incompatible situations, we offer four methods to revise the odds ratio matrices in order to find near joint probability matrices whose conditional probability matrices are not far from the two given ones; that is, we provide four methods so that the sums of squared errors are small. For each method, the sum of squared errors may depend on the chosen common reference point of the two odds ratio matrices. We first discover by example that, among these four methods, only the geometric mean method has a pattern for choosing the best reference point so that the sum of squared errors is smallest, and we then prove this finding in general. In addition, simulation results show that the geometric mean method most often provides the smallest sum of squared errors among the four methods. Hence, we suggest using the geometric mean method; its strategy for finding the best reference point is also given.
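The compatibility check and joint-matrix recovery described in this abstract can be sketched as follows, assuming all entries are positive. The key fact is that the odds ratio relative to a reference cell (here (0, 0)) is invariant to the row or column normalisers of a conditional matrix, so two compatible conditionals must produce identical odds ratio matrices. The 2x2 example is invented for demonstration.

```python
import numpy as np

def odds_ratio(M):
    """Odds ratios relative to reference cell (0, 0): the row/column
    normalisers of a conditional matrix cancel in this expression, so the
    same formula applies to both P(X|Y) and P(Y|X)."""
    return M * M[0, 0] / (M[:, [0]] * M[[0], :])

def joint_from_conditionals(A, B):
    """Recover the joint matrix from compatible conditionals
    A[i, j] = P(X=i | Y=j) and B[i, j] = P(Y=j | X=i)."""
    # A[i, j] / A[0, j] = p(i, j) / p(0, j) and B[0, j] / B[0, 0]
    # = p(0, j) / p(0, 0), so the product is p(i, j) up to one constant.
    unnorm = (A / A[[0], :]) * (B[[0], :] / B[0, 0])
    return unnorm / unnorm.sum()

# A compatible pair built from a known joint, for demonstration.
joint = np.array([[0.1, 0.2],
                  [0.3, 0.4]])
A = joint / joint.sum(axis=0, keepdims=True)   # P(X | Y): columns sum to 1
B = joint / joint.sum(axis=1, keepdims=True)   # P(Y | X): rows sum to 1

compatible = np.allclose(odds_ratio(A), odds_ratio(B))
recovered = joint_from_conditionals(A, B)      # equals `joint` when compatible
```

For an incompatible pair, the two odds ratio matrices differ, which is where the thesis's four revision methods (including the recommended geometric mean method) come in.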
|