291
Ontology Pattern-Based Data Integration. Krisnadhi, Adila Alfa. January 2015.
No description available.
292
ResearchIQ: An End-To-End Semantic Knowledge Platform For Resource Discovery in Biomedical Research. Raje, Satyajeet. 20 December 2012.
No description available.
293
Statistical Improvements for Ecological Learning about Spatial Processes. Dupont, Gaetan L. 20 October 2021.
Ecological inquiry is rooted fundamentally in understanding population abundance, both to develop theory and improve conservation outcomes. Despite this importance, estimating abundance is difficult due to the imperfect detection of individuals in a sample population. Further, accounting for space can provide more biologically realistic inference, shifting the focus from abundance to density and encouraging the exploration of spatial processes. To address these challenges, Spatial Capture-Recapture (“SCR”) has emerged as the most prominent method for estimating density reliably. The SCR model is conceptually straightforward: it combines a spatial model of detection with a point process model of the spatial distribution of individuals, using data collected on individuals within a spatially referenced sampling design. These data are often coarse in spatial and temporal resolution, though, motivating research into improving the quality of the data available for analysis. Here I explore two related approaches to improve inference from SCR: sampling design and data integration. Chapter 1 describes the context of this thesis in more detail. Chapter 2 presents a framework to improve sampling design for SCR through the development of an algorithmic optimization approach. Compared to pre-existing recommendations, these optimized designs perform just as well but with far more flexibility to account for available resources and challenging sampling scenarios. Chapter 3 presents one of the first methods of integrating an explicit movement model into the SCR model using telemetry data, which provides information at a much finer spatial scale. The integrated model shows significant improvements over the standard model to achieve a specific inferential objective, in this case: the estimation of landscape connectivity. In Chapter 4, I close by providing two broader conclusions about developing statistical methods for ecological inference. First, simulation-based evaluation is integral to this process, but the circularity of its use can, unfortunately, be understated. Second, and often underappreciated: statistical solutions should be as intuitive as possible to facilitate their adoption by a diverse pool of potential users. These novel approaches to sampling design and data integration represent essential steps in advancing SCR and offer intuitive opportunities to advance ecological learning about spatial processes.
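As an illustration of the detection component at the heart of SCR, the following sketch simulates encounter probabilities that decay with distance between hypothetical trap locations and latent activity centres; the grid layout, parameter values and half-normal form are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical 5 x 5 grid of trap locations and 20 latent activity centres.
traps = np.array([[x, y] for x in range(5) for y in range(5)], dtype=float)
activity_centres = rng.uniform(-1.0, 5.0, size=(20, 2))

# Half-normal detection function: p(d) = p0 * exp(-d^2 / (2 * sigma^2)).
p0, sigma = 0.3, 0.8  # illustrative baseline detection probability and spatial scale
dists = np.linalg.norm(activity_centres[:, None, :] - traps[None, :, :], axis=2)
p_detect = p0 * np.exp(-dists**2 / (2.0 * sigma**2))

# Simulated capture histories over K sampling occasions (individuals x traps x occasions).
K = 4
captures = rng.binomial(1, np.repeat(p_detect[:, :, None], K, axis=2))
print("Total detections:", captures.sum())
```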
294
Machine Learning Demand Forecast for Demand Sensing and Shaping : Combine the existing work done with demand sensing and shaping to achieve a higher customer service level, customer experience and balancing inventory. Bernabeu Fernandez De Liencres, Damian. January 2024.
This master's thesis investigates the utilization of data-driven approaches for demand forecasting and inventory control in the context of Ericsson's supply chain management. The study focuses on the integration of machine learning, demand shaping, and real-time data to enhance accuracy and efficiency in these critical areas. The research explores the impact of machine learning techniques on demand forecasting, highlighting the significance of precise predictions in guiding production, inventory management, and distribution strategies. To address this, the study proposes the integration of real-time data streams and Internet of Things (IoT) devices, enabling the capture of up-to-date information. This integration facilitates prompt responses to evolving demand patterns, thereby optimizing supply chain operations. The research provides valuable insights for Ericsson to enhance its demand forecasting capabilities and optimize inventory management in a data-driven environment.
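To make the forecasting idea concrete, here is a minimal sketch of a data-driven demand forecast built from lagged demand features; the synthetic demand series, feature choices and model are assumptions for illustration and do not reflect Ericsson's actual data or pipeline.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Hypothetical weekly demand for one product: seasonal pattern plus noise.
weeks = pd.date_range("2023-01-01", periods=104, freq="W")
qty = 100 + 20 * np.sin(np.arange(104) * 2 * np.pi / 52) + rng.normal(0, 5, 104)
demand = pd.DataFrame({"week": weeks, "qty": qty})

# Lagged demand as features: forecast a week from the demand observed 1, 2 and 4 weeks earlier.
for lag in (1, 2, 4):
    demand[f"lag_{lag}"] = demand["qty"].shift(lag)
demand = demand.dropna().reset_index(drop=True)

features = ["lag_1", "lag_2", "lag_4"]
train, test = demand.iloc[:-12], demand.iloc[-12:]  # hold out the last 12 weeks

model = GradientBoostingRegressor(random_state=0).fit(train[features], train["qty"])
forecast = model.predict(test[features])
print("Mean absolute error:", np.abs(forecast - test["qty"].to_numpy()).mean().round(2))
```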
295
Organisation et exploitation des connaissances sur les réseaux d'intéractions biomoléculaires pour l'étude de l'étiologie des maladies génétiques et la caractérisation des effets secondaires de principes actifs / Organization and exploitation of biological molecular networks for studying the etiology of genetic diseases and for characterizing drug side effects. Bresso, Emmanuel. 25 September 2013.
The understanding of human diseases and drug mechanisms today requires taking molecular interaction networks into account. Recent studies on biological systems are producing increasing amounts of data. However, the complexity and heterogeneity of these datasets make it difficult to exploit them for understanding atypical phenotypes or drug side-effects. This thesis presents two knowledge-based integrative approaches that combine data management, graph visualization and data mining techniques in order to improve our understanding of phenotypes associated with genetic diseases or drug side-effects. Data management relies on a generic data warehouse, NetworkDB, that integrates data on proteins and their properties. Customization of the NetworkDB model and regular updates are semi-automatic. Graph visualization techniques have been coupled with NetworkDB. This approach has facilitated access to biological network data in order to study genetic disease etiology, including X-linked intellectual disability (XLID). Meaningful sub-networks of genes have thus been identified and characterized. Drug side-effect profiles have been extracted from NetworkDB and subsequently characterized by a relational learning procedure coupled with NetworkDB. The resulting rules indicate which properties of drugs and their targets (including networks) preferentially associate with a particular side-effect profile.
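As a rough illustration of how side-effect profiles shared by several drugs might be grouped, the sketch below clusters a toy drug-by-side-effect matrix by Jaccard distance; the drugs, side effects and clustering choices are hypothetical and are not the relational learning procedure or NetworkDB schema used in the thesis.

```python
import pandas as pd
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

# Hypothetical binary drug x side-effect matrix (1 = side effect reported for the drug).
data = pd.DataFrame(
    [[1, 1, 0, 0],
     [1, 1, 0, 1],
     [0, 0, 1, 1],
     [0, 0, 1, 0]],
    index=["drugA", "drugB", "drugC", "drugD"],
    columns=["nausea", "headache", "rash", "dizziness"],
)

# Group drugs that share side effects into profiles via hierarchical clustering
# on the Jaccard distance between their side-effect sets.
dist = pdist(data.values.astype(bool), metric="jaccard")
profiles = fcluster(linkage(dist, method="average"), t=0.5, criterion="distance")
print(dict(zip(data.index, profiles)))
```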
296
Bioinformatic analyses for T helper cell subtypes discrimination and gene regulatory network reconstruction. Kröger, Stefan. 02 August 2017.
Within the last two decades, high-throughput gene expression screening technologies have led to a rapid accumulation of experimental data. The amounts of information available have enabled researchers to contrast and combine multiple experiments by synthesis; one such approach is called meta-analysis. In this thesis, we build a large gene expression data set based on publicly available studies for further research on T cell subtype discrimination and the reconstruction of T cell specific gene regulatory events.
T cells are immune cells which have the ability to differentiate into subtypes with distinct functions, initiating and contributing to a variety of immune processes. To date, an unsolved problem in understanding the immune system is how T cells obtain a specific subtype differentiation program, which relates to subtype-specific gene regulatory mechanisms. We present an assembled expression data set which describes a specific T cell subset, regulatory T (Treg) cells, which can be further categorized into natural Treg (nTreg) and induced Treg (iTreg) cells. In our analysis we have addressed specific challenges in regulatory T cell research: (i) discriminating between different Treg cell subtypes for characterization and functional analysis, and (ii) reconstructing T cell subtype specific gene regulatory mechanisms which determine the differences in subtype-specific roles for the immune system. Our meta-analysis strategy combines more than one hundred microarray experiments. This data set is applied to a machine learning based strategy of extracting surface protein markers to enable Treg cell subtype discrimination.
We identified a set of 41 genes which distinguish between nTregs and iTregs based on gene expression profile only. Evaluation of six of these genes confirmed their discriminative power, which indicates that our approach is suitable for extracting candidates for robust discrimination between experiment classes. Next, we identify gene regulatory interactions using existing reconstruction algorithms, aiming to extend the number of known gene-gene interactions for Treg cells. We applied eleven GRN reconstruction tools based on expression data only and compared their performance. Taken together, our results suggest that the available methods are not yet sufficient to extend the current knowledge by inferring so-far-unreported Treg-specific interactions. Finally, we present an approach of integrating multiple data sets based on different high-throughput technologies to reconstruct a subtype-specific GRN. We constructed a Th2 cell specific gene regulatory network of 100 genes. While 89 of these are known to be related to Th2 cell differentiation, we were able to identify 11 new candidate genes with a function in Th2 cell differentiation. We show that our approach to data integration does, in principle, allow for the reconstruction of a complex network. The future availability of more, and more consistent, data may enable the use of GRN reconstruction to improve understanding of the causes and mechanisms of cellular differentiation in the immune system and beyond and, ultimately, of their dysfunctions and diseases.
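For intuition, a minimal sketch of the kind of machine-learning step used to rank discriminative marker genes from an expression matrix is shown below; the synthetic data, gene names and random-forest ranking are illustrative assumptions, not the method or data of the thesis.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

# Hypothetical expression matrix: 60 samples (30 nTreg, 30 iTreg) x 500 genes.
labels = np.repeat(["nTreg", "iTreg"], 30)
expression = rng.normal(size=(60, 500))
expression[labels == "iTreg", :5] += 2.0  # pretend the first five genes are truly discriminative
genes = [f"gene_{i}" for i in range(500)]

# Rank genes by their contribution to separating the two subtypes.
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(expression, labels)
ranking = pd.Series(forest.feature_importances_, index=genes).sort_values(ascending=False)
print(ranking.head(10))  # candidate discriminative markers to evaluate further
```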
297
Data Governance : A conceptual framework in order to prevent your Data Lake from becoming a Data Swamp. Paschalidi, Charikleia. January 2015.
Information Security is nowadays becoming a very popular subject of discussion among both academics and organizations. Proper Data Governance is the first step towards an effective Information Security policy. As a consequence, more and more organizations are now switching their approach to data, considering data as assets, in order to get as much value as possible out of them. Living in an IT-driven world leads many researchers to approach Data Governance by borrowing IT Governance frameworks. The aim of this thesis is to contribute to this research by conducting Action Research in a large Financial Institution in the Netherlands that is currently releasing a Data Lake where all the data will be gathered and stored in a secure way. During this research, a framework for implementing proper Data Governance in the Data Lake is introduced. The results were promising and indicate that, under specific circumstances, this framework could be very beneficial not only for this specific institution, but for every organisation that would like to avoid confusion and apply Data Governance to its tasks.
298
A Study on Machine Learning Techniques for the Schema Matching Networks Problem / Um Estudo de Técnicas de Aprendizagem de Máquina para o Problema de Casamento de Esquemas em Rede. Rodrigues, Diego de Azevedo. 22 October 2018.
CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / Schema Matching is the problem of finding semantic correspondences between elements from different schemas. This is a challenging problem, since the same concept is often represented by disparate elements in the schemas. The traditional instances of this problem involved a pair of schemas to be matched. However, recently there has been an increasing interest in matching several related schemas at once, a problem known as Schema Matching Networks, where the goal is to identify elements from several schemas that correspond to a single concept. We propose a family of methods for schema matching networks based on machine learning, which has proved to be a competitive alternative for the traditional matching problem in several domains. To overcome the issue of requiring a large amount of training data, we also propose a bootstrapping procedure to automatically generate training data. In addition, we leverage constraints that arise in network scenarios to improve the quality of this data. We also propose a strategy for receiving user feedback to assert some of the matchings generated and, relying on this feedback, improving the quality of the final result. Our experiments show that our methods can outperform baselines, reaching an F1-score of up to 0.83.
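A minimal sketch of the basic idea behind learning-based schema matching follows: string-similarity features between attribute names feed a classifier that predicts whether two attributes denote the same concept. The attribute names, features and classifier are illustrative assumptions and not the models, bootstrapping procedure or constraints proposed in the dissertation.

```python
from difflib import SequenceMatcher
from sklearn.linear_model import LogisticRegression

def features(a: str, b: str) -> list:
    """Simple string-similarity features between two attribute names."""
    a, b = a.lower(), b.lower()
    ratio = SequenceMatcher(None, a, b).ratio()
    same_prefix = float(a[:3] == b[:3])
    tokens_a, tokens_b = set(a.split("_")), set(b.split("_"))
    jaccard = len(tokens_a & tokens_b) / len(tokens_a | tokens_b)
    return [ratio, same_prefix, jaccard]

# Hypothetical labelled attribute pairs (1 = same concept, 0 = different concepts).
pairs = [("customer_name", "cust_name", 1), ("customer_name", "order_id", 0),
         ("birth_date", "date_of_birth", 1), ("zip_code", "postal_code", 1),
         ("zip_code", "unit_price", 0), ("email", "e_mail", 1)]

X = [features(a, b) for a, b, _ in pairs]
y = [label for _, _, label in pairs]
matcher = LogisticRegression().fit(X, y)

# Probability that two unseen attribute names denote the same concept.
print(matcher.predict_proba([features("phone_number", "phone_no")])[0, 1])
```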
299
Qualitative Distances and Qualitative Description of Images for Indoor Scene Description and Recognition in Robotics. Falomir Llansola, Zoe. 28 November 2011.
The automatic extraction of knowledge from the world by a robotic system, in the way human beings interpret their environment through their senses, is still an unsolved task in Artificial Intelligence. A robotic agent is in contact with the world through its sensors and other electronic components, which obtain and process mainly numerical information. Sonar, infrared and laser sensors obtain distance information. Webcams obtain digital images that are represented internally as matrices of red, blue and green (RGB) colour coordinate values. All these numerical values obtained from the environment need a later interpretation in order to provide the knowledge required by the robotic agent to carry out a task.
Similarly, light wavelengths with specific amplitudes are captured by the cone cells of human eyes, likewise obtaining stimuli without meaning. However, the information that human beings can describe and remember from what they see is expressed using words, that is, qualitatively.
The exact process carried out after our eyes perceive light wavelengths and our brain interprets them is largely unknown. However, a real fact of human cognition is that people go beyond the purely perceptual experience to classify things as members of categories and attach linguistic labels to them.
As the information provided by all the electronic components incorporated in a robotic agent is numerical, the approaches that first appeared in the literature to interpret this information followed a mathematical trend. In this thesis, the problem is addressed from the other side: its main aim is to process these numerical data in order to obtain qualitative information, as human beings do.
The research work done in this thesis tries to narrow the gap between the acquisition of low-level information by robot sensors and the need to obtain high-level or qualitative information for enhancing human-machine communication and for applying logical reasoning processes based on concepts. Moreover, qualitative concepts can be given meaning by relating them to others. They can be used for reasoning by applying qualitative models that have been developed in the last twenty years for describing and interpreting metrical and mathematical concepts such as orientation, distance, velocity, acceleration, and so on. They can also be understood by human users, both written and read aloud.
The first contributions presented are the definition of a method for obtaining fuzzy distance patterns (which include qualitative distances such as ‘near’, ‘far’, ‘very far’ and so on) from the data obtained by any kind of distance sensor incorporated in a mobile robot, and the definition of a factor to measure the dissimilarity between those fuzzy patterns. Both have been applied to the integration of the distances obtained by the sonar and laser distance sensors incorporated in a Pioneer 2 dx mobile robot and, as a result, special obstacles such as a ‘glass window’ or a ‘mirror’ have been detected. Moreover, the fuzzy distance patterns provided have also been defuzzified in order to obtain a smooth robot speed and used to classify orientation reference systems into ‘open’ (defining an open space to be explored) or ‘closed’.
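To illustrate the idea of fuzzy distance patterns, the sketch below maps a single distance reading to membership degrees for qualitative labels and compares two patterns with a simple dissimilarity factor; the breakpoints, membership shapes and dissimilarity formula are assumptions for illustration, not the definitions given in the thesis.

```python
# Illustrative trapezoidal membership functions for qualitative distance labels;
# the breakpoints (in metres) are assumed, not the thesis's calibration.
LABELS = {
    "near":     (0.0, 0.0, 0.5, 1.0),
    "far":      (0.5, 1.0, 2.0, 3.0),
    "very far": (2.0, 3.0, 100.0, 100.0),
}

def trapezoid(x, a, b, c, d):
    """Trapezoidal membership function; a == b or c == d gives a saturated shoulder."""
    if b <= x <= c:
        return 1.0
    if a < x < b:
        return (x - a) / (b - a)
    if c < x < d:
        return (d - x) / (d - c)
    return 0.0

def fuzzy_pattern(reading_m):
    """Map one distance reading (metres) to a membership degree per qualitative label."""
    return {label: round(trapezoid(reading_m, *pts), 2) for label, pts in LABELS.items()}

def dissimilarity(p, q):
    """A simple dissimilarity factor between two fuzzy patterns (mean absolute difference)."""
    return sum(abs(p[k] - q[k]) for k in LABELS) / len(LABELS)

sonar, laser = fuzzy_pattern(0.8), fuzzy_pattern(2.4)
print(sonar, laser, round(dissimilarity(sonar, laser), 2))
```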
The second contribution presented is the definition of a model for qualitative image description (QID), obtained by applying the newly defined models for qualitative shape and colour description, the topology model by Egenhofer and Al-Taha [1992], and the orientation models by Hernández [1991] and Freksa [1992]. This model can qualitatively describe any kind of digital image and is independent of the image segmentation method used. The QID model has been tested in two scenarios in robotics: (i) the description of digital images captured by the camera of a Pioneer 2 dx mobile robot and (ii) the description of digital images of tile mosaics taken by an industrial camera located on a platform used by a robot arm to assemble tile mosaics.
In order to provide a formal and explicit meaning to the qualitative descriptions of the images generated, a Description Logic (DL) based ontology has been designed and is presented as the third contribution. Our approach can automatically process any random image and obtain a set of DL-axioms that describe it visually and spatially, and objects included in the images are classified according to the ontology schema using a DL reasoner. Tests have been carried out using digital images captured by a webcam incorporated in a Pioneer 2 dx mobile robot. The images taken correspond to the corridors of a building at University Jaume I, and objects within them have been classified into ‘walls’, ‘floor’, ‘office doors’ and ‘fire extinguishers’ under different illumination conditions and from different observer viewpoints.
The final contribution is the definition of similarity measures between qualitative descriptions of shape, colour, topology and orientation, and the integration of those measures into a general similarity measure between two qualitative descriptions of images. These similarity measures have been applied to: (i) extract objects with similar shapes from the MPEG7 CE Shape-1 library; (ii) assemble tile mosaics by qualitative shape and colour similarity matching; (iii) compare images of tile compositions; and (iv) compare images of natural landmarks in a mobile robot world for their recognition.
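As a simple illustration of combining feature-wise similarities into one overall image similarity, the sketch below uses a weighted sum; the weights and input values are assumptions, and the thesis defines its own shape, colour, topology and orientation measures.

```python
# Weights over the four qualitative features; the values are assumptions for illustration.
WEIGHTS = {"shape": 0.4, "colour": 0.3, "topology": 0.2, "orientation": 0.1}

def image_similarity(per_feature):
    """Weighted combination of per-feature similarities, each expected in [0, 1]."""
    return sum(WEIGHTS[k] * per_feature[k] for k in WEIGHTS)

# Example: two images that agree strongly on shape and topology, less so on colour.
print(image_similarity({"shape": 0.9, "colour": 0.7, "topology": 1.0, "orientation": 0.5}))
```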
The contributions made in this thesis are only a small step forward in the direction of enhancing robot knowledge acquisition from the world. The thesis is also written with the aim of inspiring others in their research, so that bigger contributions can be achieved in the future, improving the quality of life of our society.
300
Development of Wastewater Collection Network Asset Database, Deterioration Models and Management Framework. Younis, Rizwan. January 2010.
The dynamics around managing urban infrastructure are changing dramatically. Today’s infrastructure management challenges – in the wake of shrinking coffers and stricter stakeholders’ requirements – include finding better condition assessment tools and prediction models, and making effective and intelligent use of hard-earned data to ensure the sustainability of urban infrastructure systems. Wastewater collection networks – an important and critical component of urban infrastructure – have been neglected, and as a result, municipalities in North America and other parts of the world have accrued significant liabilities and infrastructure deficits. To reduce the cost of ownership, to cope with heightened accountability, and to provide reliable and sustainable service, these systems need to be managed in an effective and intelligent manner.
The overall objective of this research is to present a new strategic management framework and related tools to support multi-perspective maintenance, rehabilitation and replacement (M, R&R) planning for wastewater collection networks. The principal objectives of this research include:
(1) Developing a comprehensive wastewater collection network asset database consisting of high-quality condition assessment data to support the work presented in this thesis, as well as future research in this area.
(2) Proposing a framework and related system to aggregate heterogeneous data from municipal wastewater collection networks to develop better understanding of their historical and future performance.
(3) Developing statistical models to understand the deterioration of wastewater pipelines.
(4) Investigating how strategic management principles and theories can be applied to effectively manage wastewater collection networks, and proposing a new management framework and related system.
(5) Demonstrating the application of the strategic management framework and economic principles, along with the proposed deterioration model, to develop long-term financial sustainability plans for wastewater collection networks.
A relational database application, WatBAMS (Waterloo Buried Asset Management System), consisting of high-quality data from the City of Niagara Falls wastewater collection system, is developed. The wastewater pipeline inspections were completed using a relatively new Side Scanner and Evaluation Technology camera that has advantages over traditional Closed Circuit Television cameras. Appropriate quality assurance and quality control procedures were developed and adopted to capture, store and analyze the condition assessment data. To aggregate heterogeneous data from municipal wastewater collection systems, a data integration framework based on a data warehousing approach is proposed. A prototype application, BAMS (Buried Asset Management System), based on XML technologies and specifications, shows an implementation of the proposed framework. Using wastewater pipeline condition assessment data from the City of Niagara Falls wastewater collection network, the limitations of ordinary and binary logistic regression methodologies for deterioration modeling of wastewater pipelines are demonstrated. Two new empirical models based on the ordinal regression modeling technique are proposed. A new multi-perspective – that is, operational/technical, social/political, regulatory, and finance – strategic management framework based on a modified balanced-scorecard model is developed. The proposed framework is based on the findings of the first Canadian National Asset Management workshop held in Hamilton, Ontario in 2007. The application of the balanced-scorecard model, along with additional management tools such as strategy maps, dashboard reports and business intelligence applications, is presented using data from the City of Niagara Falls. Using economic principles and example management scenarios, the application of the Monte Carlo simulation technique along with the proposed deterioration model is presented to forecast financial requirements for long-term M, R&R plans for wastewater collection networks.
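To give a flavour of how a deterioration model can feed a Monte Carlo forecast of long-term rehabilitation costs, the sketch below advances pipe segments through condition grades with an assumed annual transition matrix and accumulates rehabilitation costs; the transition probabilities, costs and horizon are illustrative assumptions, not the models or figures developed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(7)

# Assumed annual transition matrix between condition grades 1 (best) .. 5 (worst).
P = np.array([
    [0.90, 0.08, 0.02, 0.00, 0.00],
    [0.00, 0.88, 0.09, 0.03, 0.00],
    [0.00, 0.00, 0.85, 0.10, 0.05],
    [0.00, 0.00, 0.00, 0.80, 0.20],
    [0.00, 0.00, 0.00, 0.00, 1.00],
])
REHAB_COST = 250_000            # assumed cost to rehabilitate one grade-5 segment
N_SEGMENTS, HORIZON, RUNS = 1_000, 20, 200

totals = []
for _ in range(RUNS):
    grades = np.ones(N_SEGMENTS, dtype=int)           # all segments start at grade 1
    cost = 0.0
    for _ in range(HORIZON):
        new_grades = grades.copy()
        for g in range(1, 6):                          # advance each grade group one year
            idx = np.flatnonzero(grades == g)
            if idx.size:
                new_grades[idx] = rng.choice(5, size=idx.size, p=P[g - 1]) + 1
        grades = new_grades
        failed = grades == 5
        cost += failed.sum() * REHAB_COST
        grades[failed] = 1                             # rehabilitated segments return to grade 1
    totals.append(cost)

print(f"Median 20-year cost: ${np.median(totals):,.0f} "
      f"(90th percentile: ${np.percentile(totals, 90):,.0f})")
```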
A myriad of asset management systems and frameworks were found for transportation infrastructure. However, to date, few efforts have been concentrated on understanding the performance behaviour of wastewater collection systems and on developing effective and intelligent M, R&R strategies. Incomplete inventories, and the scarcity and poor quality of existing datasets on wastewater collection systems, were found to be critical and limiting issues in conducting research in this field. It was found that the existing deterioration models either violated model assumptions or their assumptions could not be verified due to limited data of questionable quality. The degradation of Reinforced Concrete pipes was found to be affected by age, whereas, for Vitrified Clay pipes, the degradation was not age dependent. The results of the financial simulation model show that the City of Niagara Falls can save millions of dollars, in the long term, by following a pro-active M, R&R strategy.
The work presented in this thesis provides an insight into how an effective and intelligent management system can be developed for wastewater collection networks. The proposed framework and related system will lead to the sustainability of wastewater collection networks and assist municipal public works departments to proactively manage their wastewater collection networks.