  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Influence of coarse woody debris dams on leaf litter dynamics in U.K. headwater streams

Bold, Richard P. January 2002 (has links)
No description available.
2

Scalable Validation of Data Streams

Xu, Cheng January 2016 (has links)
In manufacturing industries, sensors are often installed on industrial equipment generating high volumes of data in real-time. For shortening the machine downtime and reducing maintenance costs, it is critical to analyze efficiently this kind of streams in order to detect abnormal behavior of equipment. For validating data streams to detect anomalies, a data stream management system called SVALI is developed. Based on requirements by the application domain, different stream window semantics are explored and an extensible set of window forming functions are implemented, where dynamic registration of window aggregations allow incremental evaluation of aggregate functions over windows. To facilitate stream validation on a high level, the system provides two second order system validation functions, model-and-validate and learn-and-validate. Model-and-validate allows the user to define mathematical models based on physical properties of the monitored equipment, while learn-and-validate builds statistical models by sampling the stream in real-time as it flows. To validate geographically distributed equipment with short response time, SVALI is a distributed system where many SVALI instances can be started and run in parallel on-board the equipment. Central analyses are made at a monitoring center where streams of detected anomalies are combined and analyzed on a cluster computer. SVALI is an extensible system where functions can be implemented using external libraries written in C, Java, and Python without any modifications of the original code. The system and the developed functionality have been applied on several applications, both industrial and for sports analytics.
3

Integrating Visual Data Flow Programming with Data Stream Management

Melander, Lars January 2016 (has links)
Data stream management and data flow programming have many things in common. In both cases one wants to transfer possibly infinite sequences of data items from one place to another, while performing transformations to the data. This Thesis focuses on the integration of a visual programming language with a data stream management system (DSMS) to support the construction, configuration, and visualization of data stream applications. In the approach, analyses of data streams are expressed as continuous queries (CQs) that emit data in real-time. The LabVIEW visual programming platform has been adapted to support easy specification of continuous visualization of CQ results. LabVIEW has been integrated with the DSMS SVALI through a stream-oriented client-server API. Query programming is declarative, and it is desirable to make the stream visualization declarative as well, in order to raise the abstraction level and make programming more intuitive. This has been achieved by adding a set of visual data flow components (VDFCs) to LabVIEW, based on the LabVIEW actor framework. With actor-based data flows, visualization of data stream output becomes more manageable, avoiding the procedural control structures used in conventional LabVIEW programming while still utilizing the comprehensive, built-in LabVIEW visualization tools. The VDFCs are part of the Visual Data stream Monitor (VisDM), which is a client-server based platform for handling real-time data stream applications and visualizing stream output. VDFCs are based on a data flow framework that is constructed from the actor framework, and are divided into producers, operators, consumers, and controls. They allow a user to set up the interface environment, customize the visualization, and convert the streaming data to a format suitable for visualization. Furthermore, it is shown how LabVIEW can be used to graphically define interfaces to data streams and dynamically load them in SVALI through a general wrapper handler. 
As an illustration, an interface has been defined in LabVIEW for accessing data streams from a digital 3D antenna. VisDM has successfully been tested in two real-world applications, one at Sandvik Coromant and one at the Ångström Laboratory, Uppsala University. For the first case, VisDM was deployed as a portable system to provide direct visualization of machining data streams. The data streams can differ in many ways as do the various visualization tasks. For the second case, data streams are homogenous, high-rate, and query operations are much more computation-demanding. For both applications, data is visualized in real-time, and VisDM is capable of sufficiently high update frequencies for processing and visualizing the streaming data without obstructions. The uniqueness of VisDM is the combination of a powerful and versatile DSMS with visually programmed and completely customizable visualization, while maintaining the complete extensibility of both.
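The producer/operator/consumer decomposition of the VDFCs can be sketched in a few lines of Python using chained generators. This is only an analogy to the actor-based design, not VisDM code; the function names are invented. It shows how items flow from a producer through a transforming operator to a consumer, one at a time, without buffering the whole stream.

```python
# Producers, operators, and consumers chained as Python generators.

def producer(values):
    # Producer: emits items into the data flow.
    for v in values:
        yield v

def scale(stream, factor):
    # Operator: transforms each item as it passes through.
    for v in stream:
        yield v * factor

def consumer(stream):
    # Consumer: materializes items, standing in for a visualization sink.
    return list(stream)

out = consumer(scale(producer([1, 2, 3]), 10))
# out == [10, 20, 30]
```

Because each stage pulls items lazily from its upstream stage, the pipeline handles unbounded streams with constant memory, which mirrors the dataflow style the thesis builds on.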
4

Platforms for Real-time Moving Object Location Stream Processing

Gadhoumi, Shérazade January 2017 (has links)
Border security is usually based on observing and analyzing the movement of Moving Point Objects (MPOs): vehicles, boats, pedestrians or aircraft, for example. This movement analysis can be made directly by an operator observing the MPOs in real time, but the process is time-consuming and approximate. This is why the states of each MPO (ID, location, speed, direction) are sensed in real time using Global Navigation Satellite System (GNSS), Automatic Identification System (AIS) and radar sensing, thus creating a stream of MPO states. This research work proposes and carries out (1) a method for detecting four different moving point patterns based on this input stream and (2) a comparison between three possible implementations of the moving point pattern detectors based on three different Data Stream Management Systems (DSMS). Moving point patterns can be divided into two groups: (1) individual location patterns are based on the analysis of the successive states of one MPO, (2) set-based relative motion patterns are based on the analysis of the relative motion of groups of MPOs within a set. This research focuses on detecting four moving point patterns: (1) the geofence pattern consists of one MPO entering or exiting one of the predefined areas called geofences, (2) the track pattern consists of one MPO following the same direction for a given number of time steps and satisfying a given spatial constraint, (3) the flock pattern consists of a group of geographically close MPOs following the same direction, (4) the leadership pattern consists of a track pattern with the corresponding MPO anticipating the direction of geographically close MPOs at the last time step. The first two patterns are individual location patterns, while the others are set-based relative motion patterns. This research work proposes a method for detecting geofence patterns based on the update of a table storing the last sensed state of each MPO.
The approach used for detecting track, flock and leadership patterns is based on the update of a REMO matrix (RElative MOtion matrix), where rows correspond to MPOs, columns to time steps, and cells record the direction of movement. For the detection of flock patterns, a simple but effective probabilistic grid-based approach is proposed in order to detect clusters of MPOs among the MPOs following the same direction: (1) the Filtering phase partitions the study area into square-shaped cells, sized according to the dimension of the spatial constraint, and selects spatially contiguous grid cells, called candidate areas, that potentially contain flock patterns; (2) for each candidate area, the Refinement phase generates disks of the size of the spatial constraint within the selected area until one disk contains enough MPOs, so that the corresponding MPOs are considered to build a flock pattern. The pattern detectors are implemented on three DSMSs presenting different characteristics: Esri ArcGIS GeoEvent Extension for Server (GeoEvent Ext.), a workflow-based technology that ingests each MPO state separately; Apache Spark Streaming (Spark), a MapReduce-based technology that processes the input stream in batches in a highly parallel processing framework; and Apache Flink (Flink), a hybrid technology that ingests the states separately but offers several MapReduce semantics. GeoEvent Ext. only lends itself to a native implementation of the geofence detector, while the other DSMSs accommodate the implementation of all detectors. Therefore, the geofence, track, flock and leadership pattern detectors are implemented on Spark and Flink, and empirically evaluated in terms of scalability in time/space based on the variation of parameters characterizing the patterns and/or the input stream. The results of the empirical evaluation show that the implementation on Flink globally uses less computer resources than the one on Spark. Moreover, the program based on Flink is less sensitive to the variability of parameters describing either the input stream or the patterns to be detected.
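The geofence detector described above, built on a table of last sensed states, can be sketched as follows. This is an illustrative Python sketch under stated assumptions, not the thesis implementation: geofences are simplified to axis-aligned rectangles, and the class and method names are invented. The essential mechanism is the comparison of the previous and current containment status per MPO.

```python
# Each geofence is an axis-aligned rectangle (xmin, ymin, xmax, ymax).
# The detector keeps the last known position per MPO and reports
# enter/exit events when a new state arrives.

def inside(fence, x, y):
    xmin, ymin, xmax, ymax = fence
    return xmin <= x <= xmax and ymin <= y <= ymax

class GeofenceDetector:
    def __init__(self, fences):
        self.fences = fences  # {name: (xmin, ymin, xmax, ymax)}
        self.last = {}        # table of last sensed position per MPO id

    def update(self, mpo_id, x, y):
        events = []
        for name, fence in self.fences.items():
            was = mpo_id in self.last and inside(fence, *self.last[mpo_id])
            now = inside(fence, x, y)
            if now and not was:
                events.append(("enter", name))
            elif was and not now:
                events.append(("exit", name))
        self.last[mpo_id] = (x, y)  # update the last-state table
        return events

d = GeofenceDetector({"harbor": (0, 0, 10, 10)})
e1 = d.update("boat1", 5, 5)    # first state falls inside the fence
e2 = d.update("boat1", 20, 5)   # MPO has left the fence
```

A real deployment would use polygonal fences and a spatial index so each state update only tests nearby geofences, but the stateful enter/exit logic stays the same.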
5

Value Stream Management for implementing Lean flow in administrative processes: an application in an automotive-sector company

RAMOS, Rodrigo da Cruz 29 April 2016 (has links)
The economic situation in Brazil and the fall in demand in the automotive industry have generated a strong increase in competitiveness in the sector, forcing companies to become increasingly lean. In this sense, companies are investing in applying the concepts of Lean Manufacturing. However, to achieve excellence, companies need to extend these concepts to the administrative environment. From this context arises the question of how Lean practices can contribute to improving the performance of administrative processes. In administrative activities, the application of Lean Manufacturing tools and principles is known as Lean Office. The objective of this research is to present the contribution of Lean Office concepts to reducing the lead time of the administrative process of quoting and issuing purchase orders for custom vehicles at an automaker. For this purpose, the theoretical framework is based on Lean principles, on the methodological approach of Value Stream Management proposed by Tapping and Shuker to facilitate the implementation of Lean Office, and on Laureau's classification of the waste found in the administrative environment. The methodology used in this work was action research; the data collection instruments were participant observation, document research, and field research through semi-structured interviews.
The participation of those responsible for the activities related to the processes surveyed made it possible to map the current state, identify the problems and waste in the process, design the future state, and develop and implement an action plan to solve the problems identified. The main results are a reduction of waste of around 80% and a decrease in the process lead time of around 77%, providing a more efficient use of resources and increasing the company's competitiveness, demonstrating the effectiveness of the approach.
6

Dynamic Energy-Aware Database Storage and Operations

Behzadnia, Peyman 29 March 2018 (has links)
Energy consumption has become a first-class optimization goal in design and implementation of data-intensive computing systems. This is particularly true in the design of database management systems (DBMS), which is one of the most important servers in software stack of modern data centers. Data storage system is one of the essential components of database and has been under many research efforts aiming at reducing its energy consumption. In previous work, dynamic power management (DPM) techniques that make real-time decisions to transition the disks to low-power modes are normally used to save energy in storage systems. In this research, we tackle the limitations of DPM proposals in previous contributions and design a dynamic energy-aware disk storage system in database servers. We introduce a DPM optimization model integrated with model predictive control (MPC) strategy to minimize power consumption of the disk-based storage system while satisfying given performance requirements. It dynamically determines the state of disks and plans for inter-disk data fragment migration to achieve desirable balance between power consumption and query response time. Furthermore, via analyzing our optimization model to identify structural properties of optimal solutions, a fast-solution heuristic DPM algorithm is proposed that can be integrated in large-scale disk storage systems, where finding the most optimal solution might be long, to achieve near-optimal power saving solution within short periods of computational time. The proposed ideas are evaluated through running simulations using extensive set of synthetic workloads. The results show that our solution achieves up to 1.65 times more energy saving while providing up to 1.67 times shorter response time compared to the best existing algorithm in literature. Stream join is a dynamic and expensive database operation that performs join operation in real-time fashion on continuous data streams. 
Stream joins, also known as window joins, impose high computational time and potentially higher energy consumption compared to other database operations, and thus we also tackle energy-efficiency of stream join processing in this research. Given that there is a strong linear correlation between energy-efficiency and performance of in-memory parallel join algorithms in database servers, we study parallelization of stream join algorithms on multicore processors to achieve energy efficiency and high performance. Equi-join is the most frequent type of join in query workloads and symmetric hash join (SHJ) algorithm is the most effective algorithm to evaluate equi-joins in data streams. To best of our knowledge, we are the first to propose a shared-memory parallel symmetric hash join algorithm on multi-core CPUs. Furthermore, we introduce a novel parallel hash-based stream join algorithm called chunk-based pairing hash join that aims at elevating data throughput and scalability. We also tackle parallel processing of multi-way stream joins where there are more than two input data streams involved in the join operation. To best of our knowledge, we are also the first to propose an in-memory parallel multi-way hash-based stream join on multicore processors. Experimental evaluation on our proposed parallel algorithms demonstrates high throughput, significant scalability, and low latency while reducing the energy consumption. Our parallel symmetric hash join and chunk-based pairing hash join achieve up to 11 times and 12.5 times more throughput, respectively, compared to that of state-of-the-art parallel stream join algorithm. Also, these two algorithms provide up to around 22 times and 24.5 times more throughput, respectively, compared to that of non-parallel (sequential) stream join computation where there is one processing thread.
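The classic (sequential) symmetric hash join that the abstract builds on can be sketched in a few lines. This is a minimal single-threaded illustration of the standard SHJ technique, not the thesis's parallel algorithm: each arriving tuple first probes the hash table of the opposite stream to emit matches, then is inserted into its own side's table, so join results are produced incrementally as tuples arrive.

```python
from collections import defaultdict

def symmetric_hash_join(stream):
    """stream yields (side, key, payload) with side in {'L', 'R'}.

    Each arriving tuple probes the opposite side's hash table and is
    then inserted into its own table, producing results incrementally.
    """
    tables = {"L": defaultdict(list), "R": defaultdict(list)}
    for side, key, payload in stream:
        other = "R" if side == "L" else "L"
        for match in tables[other][key]:
            # Emit results with the left payload first.
            yield (key, payload, match) if side == "L" else (key, match, payload)
        tables[side][key].append(payload)

events = [("L", 1, "a"), ("R", 1, "x"), ("R", 2, "y"), ("L", 2, "b")]
out = list(symmetric_hash_join(events))
# out == [(1, "a", "x"), (2, "b", "y")]
```

The parallel variants in the thesis partition this work across threads; a window join would additionally evict expired tuples from both tables, which is omitted here for brevity.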
7

A Dynamic Attribute-Based Load Shedding and Data Recovery Scheme for Data Stream Management Systems

Ahuja, Amit 29 June 2006 (has links) (PDF)
Data streams transmitted over a network channel whose capacity is less than the data rate of the streams are very common when using channels such as dial-up or low-bandwidth wireless links. Not only does this lower capacity create delays, it also causes consequential network problems such as packet losses, network congestion, and errors in data packets, giving rise to further problems and creating a cycle of problems that is hard to break out of. In this thesis, we present a new approach for shedding the less informative attribute data from a data stream with a fixed schema to maintain a data rate below the network channel's capacity. A scheme for shedding attributes, instead of tuples, becomes imperative in stream data where the data for one of the attributes remains relatively constant or changes less frequently than the data for the other attributes. In such a data stream management system, shedding a complete tuple would lead to shedding some informative attribute data along with the less informative attribute data in the tuple, whereas shedding only the less informative attribute data causes only the less informative data to be dropped. In this thesis, we deal with two major load shedding problems: intra-stream and inter-stream load shedding. The intra-stream load shedding problem deals with shedding the less informative attributes when a single data stream with a data rate greater than the channel capacity has to be transmitted to the destination over the channel. The inter-stream load shedding problem refers to shedding attributes among different streams when more than one stream has to be transferred to the destination over a channel whose capacity is less than the combined data rate of all the streams to be transmitted.
As a solution to the inter-stream or intra-stream load shedding problem, we apply our load shedding schema approach to determine a ranking among the attributes of a single data stream or multiple data streams, with the least informative attribute(s) ranked highest. The amount of data to be shed to maintain the data rate below the capacity is calculated dynamically, which means that it changes with any change in the channel capacity or the data rate. Using these two pieces of information, a load shedding schema describing the attributes to be shed is generated. The load shedding schema is generated dynamically, which means that it is updated with any change in (i) the rankings of attributes, which capture the rate of change of the values of each attribute, (ii) the channel capacity, and (iii) the data rate, even after load shedding has been invoked. The load shedding schema is updated using our load shedding schema re-evaluation algorithm, which adapts to the data stream characteristics and follows the attribute data variation curve of the data stream. Since data dropped at the source may be of interest to the user at the destination, we also propose a recovery module that can be invoked to recover attribute data already shed. The recovery module maintains a minimal amount of information about data already shed for recovery purposes. Preliminary experimental results have shown that recovery accuracy ranges from 90% to 99%, which requires only 5% to 33% and 4.88% to 50% of the dropped data to be stored for weather reports and stock exchanges, respectively. Storing recovery information imposes a storage and processing burden on the source site, and our recovery method aims at satisfactory recovery accuracy while imposing a minimal burden on the source site.
Our load shedding approach, which achieves high performance in reducing the data stream load, (i) handles a wide range of data streams in different application domains (such as weather, stocks, and network performance), (ii) is dynamic in nature, meaning that the load shedding scheme adjusts the amount of data to be shed and which attribute data to shed according to the current load and network capacity, and (iii) provides a data recovery mechanism capable of recovering any shed attribute data, with recovery accuracy of up to 90% with a very low burden on the source site and up to 99% with a higher burden for some stream data. To the best of our knowledge, the dynamic load shedding scheme we propose is the first in the literature to shed attributes, instead of tuples, while providing a recovery mechanism in a data stream management system. Our load shedding approach is unique in that it is not a static load shedding schema, which is less appealing in an ever-changing (sensor) network environment, and it is not based on queries, but instead works on the general characteristics of the data stream under consideration.
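The core idea, ranking attributes by how much their values change and shedding the most constant ones first, can be sketched as follows. This is an illustrative Python sketch, not the thesis's algorithm: the ranking measure (distinct-value count over a sample) and all names are simplifications, whereas the thesis ranks by the attribute data variation curve and adapts the amount shed to channel capacity.

```python
# Rank attributes from least to most informative over a sample of tuples,
# then shed the most constant attributes until only `keep` remain.

def rank_attributes(tuples):
    """Order attribute names from least to most informative, using the
    number of distinct values as a crude rate-of-change measure."""
    attrs = tuples[0].keys()
    distinct = {a: len({t[a] for t in tuples}) for a in attrs}
    return sorted(attrs, key=lambda a: distinct[a])

def shed(tuples, keep):
    ranking = rank_attributes(tuples)
    dropped = set(ranking[:len(ranking) - keep])  # shed least informative
    return [{a: v for a, v in t.items() if a not in dropped} for t in tuples]

rows = [
    {"station": "S1", "temp": 20.1, "wind": 3.0},
    {"station": "S1", "temp": 20.5, "wind": 4.0},
    {"station": "S1", "temp": 21.0, "wind": 2.0},
]
thin = shed(rows, keep=2)
# "station" never changes, so it is shed first; temp and wind survive
```

A recovery module in the spirit of the thesis would store the last transmitted value of each shed attribute at the source, since a near-constant attribute can be reconstructed accurately from very little saved data.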
8

VALUE STREAM ANALYSIS AT ROL PRODUCTION

Edh, Nina January 2010 (has links)
No description available.
9

Runtime Adaptation of Stateful Data Stream Operators

Wolf, Bernhard 04 December 2013 (has links) (PDF)
Changing data stream queries at runtime is made difficult in particular by stateful data stream operators. Since operator states are kept in main memory and are lost on a restart, migration techniques have been developed in the past to preserve the internal operator states during a change. These migration techniques are based on two different approaches, state transfer and parallel execution, but due to their realization they are limited to centralized execution. With growing demands regarding data volumes and response times, data stream systems are increasingly executed in a distributed fashion, for example in sensor networks or distributed IT systems. For adapting queries at runtime in such settings, existing migration strategies are unsuitable or only partially suitable. This thesis contributes to solving this problem and to optimizing migration in data stream systems. Using preventive maintenance strategies in factory environments as an example, requirements for data stream processing, and for migration in particular, are derived. The general goal is the fastest possible migration while results continue to be emitted. A detailed analysis of the existing migration strategies discusses their strengths and weaknesses with respect to these requirements. For the adaptation of running data stream queries, a general methodology is presented that serves as the basis for the new strategies. This adaptation methodology supports two procedures for determining migration configurations: a numerical procedure for periodic data streams and a heuristic procedure that can also be applied to aperiodic data streams.
An essential feature for minimizing the migration duration is the restriction to the necessary state values, since in distributed environments a transmission time must be budgeted for the state transfer. These are two aspects that existing approaches do not take into account. Newly developed state transfer methods additionally make it possible to influence the transmission order of the individual state values. The concepts were implemented in an OSGi-based prototype and also analyzed by simulation. A comprehensive evaluation demonstrates that all components and concepts work as intended. The performance comparison between the existing and the new migration strategies comes out clearly in favor of the new strategies, which are moreover able to fulfill all requirements.
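The idea of state transfer restricted to the necessary state values can be sketched with a toy stateful operator. This is an illustrative Python sketch, not the thesis's OSGi-based prototype: the operator (a keyed counter), the `migrate` helper, and the JSON-over-the-wire transfer are all invented stand-ins. It shows the one point the abstract stresses: only state entries still needed by the running query are serialized and transferred.

```python
import json

class CountOperator:
    """A stateful operator that counts occurrences per key."""

    def __init__(self, state=None):
        self.state = state or {}

    def process(self, key):
        self.state[key] = self.state.get(key, 0) + 1
        return self.state[key]

def migrate(old, live_keys):
    """Transfer only the state entries still needed by the new instance."""
    needed = {k: v for k, v in old.state.items() if k in live_keys}
    blob = json.dumps(needed)            # stands in for the network transfer
    return CountOperator(json.loads(blob))

old = CountOperator()
for k in ["a", "a", "b", "c"]:
    old.process(k)
# Key "b" has expired from the query's window, so it is not transferred.
new = migrate(old, live_keys={"a", "c"})
```

Shrinking the transferred state this way shortens the pause between stopping the old operator instance and resuming on the new one, which is exactly the migration duration the thesis seeks to minimize.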
10

Development of an expert system as a tool for better management of river water quality

Llorens i Ribes, Esther 18 June 2004 (has links)
Nowadays it cannot be denied that humans are one more component of river basins and that their activity greatly affects water quality. At the European level, the high density of population in river basins has led to a deterioration of river water quality. In recent decades, the increase of nutrient loads in aquatic systems has become a priority problem for the competent water agencies to solve. The management of fluvial ecosystems is not an easy task. They are complex systems because of the close relationship between fluvial ecosystems and the terrestrial ecosystems they drain. In addition to the complexity of these systems, there is the associated difficulty of managing or controlling pollutant inputs from both point and non-point sources. For all these reasons, the management of river water quality is a complex task that requires a multidisciplinary approach. To achieve this approach, different tools have been used, from mathematical models to expert systems and decision support systems.
However, most efforts have been directed at solving problems of limited complexity, so that many complex environmental problems, such as river ecosystem management, have not really been addressed. Therefore, tools are needed that help in decision-making processes and that incorporate broad heuristic and empirical knowledge: expert systems and decision support systems. The optimal management of river water quality requires an integrated and multidisciplinary approach, which can be achieved with an intelligent tool built on the concepts and methods of human reasoning. This thesis describes the methodology developed and applied in the creation and construction of an Expert System, as well as the development process of this Expert System as the main reasoning module of an Environmental Decision Support System. The main objective of this thesis has been the development of a tool to help water managers in decision-making processes for the management of anthropogenically altered river reaches, in order to improve their water quality. In addition, the thesis shows the functioning of the developed tool through two case studies. The results derived from the Expert System developed, implemented and presented in this thesis show that such systems can be useful tools for improving the management of fluvial ecosystems.
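An expert system of the kind described encodes heuristic knowledge as condition-action rules. The following Python sketch is purely illustrative and is not taken from the thesis: the rules, parameter names, and numeric thresholds are hypothetical examples, not real regulatory limits. It shows the basic shape of a rule base that maps observed facts about a river reach to diagnoses.

```python
# A tiny rule base in the spirit of an expert system: each rule pairs a
# condition on the observed facts with a diagnostic conclusion.
# All thresholds below are invented for illustration only.
rules = [
    (lambda f: f["ammonium_mg_l"] > 1.0,
     "likely point-source organic pollution"),
    (lambda f: f["nitrate_mg_l"] > 25.0,
     "likely diffuse agricultural pollution"),
    (lambda f: f["dissolved_oxygen_mg_l"] < 5.0,
     "risk of hypoxia for fish"),
]

def diagnose(facts):
    # Fire every rule whose condition holds and collect its conclusion.
    return [conclusion for condition, conclusion in rules if condition(facts)]

report = diagnose({"ammonium_mg_l": 2.3,
                   "nitrate_mg_l": 10.0,
                   "dissolved_oxygen_mg_l": 4.2})
# two rules fire: organic pollution and hypoxia risk
```

A full system would chain rules (conclusions becoming new facts), attach certainty factors, and justify each diagnosis to the water manager, but the rule-matching core stays this simple.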
