51

Performance analysis of on-device streaming speech recognition

Köling, Martin January 2021 (has links)
Speech recognition is the task where a machine transcribes human speech into a written format. Groundbreaking scientific progress within speech recognition has been fueled by recent advancements in deep learning research, improving both key metrics of the task: accuracy and speed. Traditional speech recognition systems listen to, and analyse, the full speech utterance before making an output prediction. Streaming speech recognition, on the other hand, makes predictions in real time, word by word, as speech is received. However, the improved speed of streaming speech recognition comes at the cost of reduced accuracy, given the constraint of not having access to the full speech utterance at all times. In this thesis, we investigate the accuracy of streaming speech recognition systems by implementing models with state-of-the-art Transformer-based architectures. Our results show that two similar models, one streaming and one non-streaming, trained on a 100-hour subset of LibriSpeech, achieve word error rates of 9.99% and 10.76% on test-clean without using a language model. This puts the cost of streaming at a 7.2% accuracy degradation. Furthermore, streaming models can be used "on-device", which has many benefits, including lower inference time, privacy preservation, and the ability to operate without an internet connection.
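For concreteness, the relative degradation implied by these word error rates can be checked in a few lines of Python (a quick sketch; the variable names are ours, and we assume the lower WER belongs to the non-streaming model, as the degradation claim implies):

```python
# Quick check of the reported figures (assumption: the non-streaming model
# scores 9.99% WER and the streaming model 10.76%, as the degradation
# claim implies).
wer_non_streaming = 9.99   # % on LibriSpeech test-clean
wer_streaming = 10.76      # % on LibriSpeech test-clean

relative_cost = (wer_streaming - wer_non_streaming) / wer_non_streaming
print(f"streaming cost: {relative_cost:.1%} relative WER increase")
# Prints ~7.7%; the thesis reports 7.2%, so its exact computation or
# the rounding of the WERs presumably differs slightly.
```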
52

CheesePi: Delay Characterization through TCP-based Analysis from End-to-End Monitoring

Portelli, Rebecca January 2016 (has links)
With increasing access to interconnected IP networks, people demand faster response times from Internet services. Traffic from web browsing, the second most popular service, is particularly time-sensitive. This demands reliability and a guarantee of delivery with a good quality of service from ISPs. Additionally, the majority of the population does not have the technical background to monitor delay themselves from their home networks, and their ISPs do not have a vantage point from which to monitor and diagnose network problems from the users' perspective. Hence, the aim of this research was to characterise the "in-protocol" network delay encountered during web browsing from within a LAN. This research presents TCP traffic monitoring performed on a client device, as well as TCP traffic monitoring at both the client-end and server-end devices separately, observing an automated web client/server communication. This was followed by offline analysis of the captured traces, in which each TCP flow was dissected into handshake, data transfer, and teardown phases. The aim of this extraction was to enable characterisation of the network round-trip delay, as well as the network physical delay, end-host processing delay, web transfer delay, and packet loss as perceived by the end hosts during data transfer. The outcome of measuring from both end devices showed that monitoring both ends of a client/server communication results in a more accurate measurement of the genuine delay encountered when packets traverse the network than measuring from the client-end only. Primarily, this was established through the ability to distinguish between the pure network delay and the kernel processing delay experienced during the TCP handshake and teardown. Secondly, it was confirmed that the two RTTs identified in a TCP handshake are not symmetrical, and that a TCP teardown RTT takes longer than the TCP handshake RTT within the same TCP flow, since a server must take measures to avoid SYN flooding attacks. Thirdly, by monitoring from both end devices, it was possible to identify routing path asymmetries by comparing the physical one-way delay of a packet on the forward path with that of a packet on the reverse path. Lastly, by monitoring from both end devices, it is possible to distinguish between a packet that was actually lost and a packet that arrived with a higher delay than its subsequent packet during data transfer. Furthermore, utilising TCP flows to measure the RTT delay, excluding end-host processing, gave a better characterisation of the RTT delay than using ICMP traffic.
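As a rough illustration of the flow dissection described above, the two handshake RTTs can be extracted from a client-side capture along these lines (a minimal sketch assuming scapy is installed and the pcap contains a single TCP flow; this is not the thesis's actual tooling):

```python
# Sketch: estimate the two handshake RTTs of one TCP flow from a
# client-side capture, in the spirit of the flow dissection above.
from scapy.all import rdpcap, TCP

SYNACK = 0x12  # SYN + ACK flag bits

def handshake_rtts(pcap_path):
    t_syn = t_synack = t_ack = None
    for pkt in rdpcap(pcap_path):
        if TCP not in pkt:
            continue
        flags = int(pkt[TCP].flags)
        if flags & 0x02 and not flags & 0x10 and t_syn is None:
            t_syn = float(pkt.time)        # client SYN
        elif flags == SYNACK and t_synack is None:
            t_synack = float(pkt.time)     # server SYN+ACK
        elif t_synack is not None and flags & 0x10 and t_ack is None:
            t_ack = float(pkt.time)        # client ACK
            break
    # RTT1 (SYN -> SYN+ACK) includes server kernel processing;
    # RTT2 (SYN+ACK -> ACK) is measured at the client. The thesis
    # finds these are generally not symmetrical.
    return t_synack - t_syn, t_ack - t_synack
```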
53

Detection and localization of link-level network anomalies using end-to-end path monitoring

Salhi, Emna 13 February 2013 (has links) (PDF)
The aim of this thesis is to come up with cost-efficient, accurate and fast schemes for link-level network anomaly detection and localization. It has been established that to detect all potential link-level anomalies, a set of paths that covers all links of the network must be monitored, whereas to localize all potential link-level anomalies, a set of paths that can distinguish between all links of the network pairwise must be monitored. Either end-node of each monitored path must be equipped with a monitoring device. Most existing link-level anomaly detection and localization schemes proceed in two steps. The first step selects a minimal set of monitor locations that can detect/localize any link-level anomaly. The second step selects a minimal set of monitoring paths between the selected monitor locations such that all links of the network are covered/distinguishable pairwise. However, such stepwise schemes do not consider the interplay between the conflicting optimization objectives of the two steps, which results in suboptimal consumption of network resources and biased monitoring measurements. One objective of this thesis is to evaluate and reduce this interplay. To this end, one-step anomaly detection and localization schemes that jointly select monitor locations and the paths to be monitored are proposed. Furthermore, we demonstrate that the already established condition for anomaly localization is sufficient but not necessary. A necessary and sufficient condition that reduces the localization cost drastically is established. The problems are shown to be NP-hard, and scalable, near-optimal heuristic algorithms are proposed.
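The classical localization condition mentioned above has a compact set-theoretic reading: every link must acquire a distinct "signature" of monitored paths. A small sketch of that pairwise-distinguishability test (illustrative only; the thesis's weaker necessary-and-sufficient condition is not reproduced here):

```python
# Classical test: all links covered, and no two links appearing in
# exactly the same set of monitored paths (otherwise their anomalies
# cannot be told apart).
from itertools import combinations

def covers_and_distinguishes(paths, links):
    """paths: list of sets of link ids; links: iterable of all link ids."""
    signature = {l: frozenset(i for i, p in enumerate(paths) if l in p)
                 for l in links}
    covered = all(signature[l] for l in links)              # detection
    distinct = all(signature[a] != signature[b]             # localization
                   for a, b in combinations(links, 2))
    return covered, distinct

# Example: three links, two monitoring paths.
paths = [{"l1", "l2"}, {"l2", "l3"}]
print(covers_and_distinguishes(paths, ["l1", "l2", "l3"]))  # (True, True)
```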
54

Improved algorithms for TCP congestion control

Edwan, Talal A. January 2010 (has links)
Reliable and efficient data transfer on the Internet is an important issue. Since the late 70s, the protocol responsible for this has been the de facto standard TCP, which has proven successful throughout the years; its self-managed congestion control algorithms have maintained the stability of the Internet for decades. However, a variety of new technologies, such as high-speed networks (e.g. fibre optics) with high-speed, long-delay setups (e.g. cross-Atlantic links) and wireless technologies, have posed many challenges to TCP congestion control algorithms. The congestion control research community has proposed solutions to most of these challenges. This dissertation adds to the existing work as follows. Firstly, tackling the high-speed, long-delay problem of TCP, we propose enhancements to one of the existing TCP variants (part of the Linux kernel stack) and then propose our own variant: TCP-Gentle. Secondly, tackling the challenge of passively differentiating wireless loss from congestive loss, we propose a novel loss differentiation algorithm that quantifies the noise in packet inter-arrival times and uses this information, together with the span (the ratio of maximum to minimum packet inter-arrival times), to adapt the multiplicative decrease factor according to a predefined logical formula. Finally, extending the well-known drift model of TCP to account for wireless loss and some hypothetical cases (e.g. a variable multiplicative decrease), we undertake a stability analysis of the new version of the model.
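To make the loss-differentiation idea concrete, here is a hedged sketch of how noise and span in packet inter-arrival times could steer the multiplicative decrease factor (the thresholds and decision rule are illustrative placeholders, not the thesis's predefined logical formula):

```python
# Noisy, widely spread inter-arrival times suggest wireless (random)
# loss, so the sender backs off less aggressively on a loss event.
import statistics

def adaptive_beta(inter_arrivals, beta_congestion=0.5, beta_wireless=0.875):
    noise = statistics.pstdev(inter_arrivals) / statistics.mean(inter_arrivals)
    span = max(inter_arrivals) / min(inter_arrivals)
    wireless_suspected = noise > 0.5 and span > 4.0   # illustrative rule
    return beta_wireless if wireless_suspected else beta_congestion

cwnd = 100.0
samples = [0.9, 1.1, 4.0, 0.4, 2.5]     # packet inter-arrival times (ms)
cwnd *= adaptive_beta(samples)          # multiplicative decrease on loss
print(cwnd)                             # 87.5: wireless loss suspected
```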
55

Heterogeneous flows worst case analysis in avionics embedded networks

Bauer, Henri 04 October 2011 (has links)
The certification process for avionics networks requires guarantees on data transmission delays. However, multiplexing and the sharing of communication resources in networks such as the AFDX (Avionics Full Duplex Switched Ethernet) make it difficult to compute a worst-case end-to-end delay for each flow. Tools such as Network Calculus provide a (pessimistic) upper bound on this worst-case delay. The communication needs of modern commercial aircraft keep expanding, and a growing number of flows with various constraints and characteristics must share the existing resources. Currently deployed AFDX networks do not differentiate between multiple classes of traffic: messages are processed in their arrival order in the output ports of the switches (FIFO servicing policy). The purpose of this thesis is to show that it is possible to compute upper bounds on end-to-end transmission delays in networks that implement more advanced servicing policies, based on static priorities (Priority Queuing) or on fairness (Fair Queuing). We show how the trajectory approach, based on scheduling theory in asynchronous distributed systems, can be applied to current and future AFDX networks (supporting advanced servicing policies with flow differentiation capabilities). We compare the performance of this approach with the reference tools whenever possible and study the pessimism of the computed upper bounds.
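For background, the flavour of bound that tools like Network Calculus provide can be illustrated with the textbook token-bucket/rate-latency result (a minimal sketch under those standard assumptions; the trajectory approach studied in the thesis computes tighter bounds than this):

```python
# A flow token-bucket constrained by (burst b, rate r) crossing a
# rate-latency server (rate R, latency T) has worst-case delay at most
# T + b/R, provided r <= R. Standard textbook bound, not the thesis's.
def delay_bound(b_bits, r_bps, R_bps, T_s):
    assert r_bps <= R_bps, "stability requires r <= R"
    return T_s + b_bits / R_bps

# Example: 4 kbit burst into a 100 Mbit/s output port with 16 us latency.
print(delay_bound(b_bits=4000, r_bps=1e6, R_bps=100e6, T_s=16e-6))  # s
```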
56

When Decision Meets Estimation: Theory and Applications

Yang, Ming 15 December 2007 (has links)
In many practical problems, both decision and estimation are involved. This dissertation studies the relationship between decision and estimation in these problems, so that more accurate inference methods can be developed. Hybrid estimation is an important formulation that deals with state estimation and model structure identification simultaneously. Multiple-model (MM) methods are the most widely used tool for hybrid estimation. A novel approach to predicting Internet end-to-end delay using MM methods is proposed. Based on preliminary analysis of the collected end-to-end delay data, we propose an off-line model set design procedure using vector quantization (VQ) and short-term time series analysis so that MM methods can be applied to predict on-line measurement data. Experimental results show that the proposed MM predictor outperforms two widely used adaptive filters in terms of prediction accuracy and robustness. Although hybrid estimation can identify model structure, it mainly focuses on the estimation part. When decision and estimation are of (nearly) equal importance, a joint solution is preferred. Noticing the resemblance between the two, we generalize a new Bayes risk from those of decision and estimation, respectively. Based on this generalized Bayes risk, a novel, integrated solution to decision and estimation is introduced. Our study aims to give a more systematic view of the joint decision and estimation (JDE) problem, from which we believe work in various fields, such as target tracking, communications, and time series modeling, will benefit greatly. We apply this integrated Bayes solution to joint target tracking and classification, a very important topic in target inference, with simplified measurement models. The results of this new approach are compared with two conventional strategies. Lastly, a surveillance testbed is being built for such purposes as algorithm development and performance evaluation, with which we try to bridge the gap between theory and practice. An overview and the architecture of the testbed are given, and one case study is presented. The testbed is capable of serving tasks with decision and/or estimation aspects and is helpful for the development of JDE algorithms.
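One plausible way to write such a generalized Bayes risk, hedged since the dissertation's exact formulation may differ, is a weighted combination of the decision cost and the conditional estimation cost:

```latex
% c_{ij}: cost of deciding D_i when hypothesis H_j holds;
% C(x, \hat{x}): estimation cost; \alpha, \beta weight the two parts.
% Notation is illustrative, not necessarily the dissertation's.
R = \sum_{i}\sum_{j}\int_{\mathcal{Z}_i}
    \Big( \alpha\, c_{ij}
        + \beta\, \mathbb{E}\big[\, C(x,\hat{x}) \mid z, H_j \big] \Big)
    \, P(H_j \mid z)\, f(z)\, \mathrm{d}z
```

Here Z_i denotes the region of the observation space mapped to decision D_i; setting the estimation weight to zero recovers the classical decision risk, and dropping the decision term with a single hypothesis recovers the estimation risk.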
57

Modeling and evaluation of the end-to-end delay in wireless sensor networks

Despaux, François 15 September 2015 (has links)
In this thesis, we propose an approach that combines measurements and analytical techniques to infer a Markov chain model from MAC protocol execution traces, in order to estimate the end-to-end delay in multi-hop transmission scenarios. This approach captures the main features of a WSN, yielding a Markov chain suitable for modelling it. By means of a frequency-domain analysis, the end-to-end delay distribution for multi-hop scenarios is then obtained. This is an important contribution with respect to existing analytical approaches, which cannot be extended to multi-hop scenarios because the arrival distribution at intermediate nodes is not known. Since the local delay distribution of each node is obtained by analysing the MAC protocol execution traces for a given traffic scenario, the obtained model (and therefore the whole end-to-end delay distribution) is traffic-dependent. To overcome this problem, we propose an approach based on non-linear regression techniques, applied to a limited set of samples, to generalise the model in terms of the traffic rate. Results were validated for different MAC protocols (X-MAC, ContikiMAC, IEEE 802.15.4) as well as a well-known routing protocol (RPL) on real testbeds (IOT-LAB).
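The frequency-domain step lends itself to a compact numerical sketch: for independent hops, the end-to-end delay PMF is the convolution of the per-hop PMFs, i.e. a pointwise product of their transforms (illustrative numpy code; in the thesis the per-hop distributions come from the inferred Markov chain, not from the toy values below):

```python
# End-to-end delay PMF as the FFT-based convolution of per-hop PMFs,
# using the convolution theorem (product in the frequency domain).
import numpy as np

def end_to_end_pmf(hop_pmfs):
    n = sum(len(p) - 1 for p in hop_pmfs) + 1   # support of the sum
    spectrum = np.ones(n, dtype=complex)
    for pmf in hop_pmfs:
        spectrum *= np.fft.fft(pmf, n)          # zero-pads each PMF to n
    return np.fft.ifft(spectrum).real

# Example: three hops with a simple per-slot delay PMF.
hop = [0.6, 0.3, 0.1]                            # P(delay = 0, 1, 2 slots)
pmf = end_to_end_pmf([hop, hop, hop])
print(pmf.round(4), pmf.sum().round(4))          # distribution sums to 1.0
```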
58

Software-defined datacenter network debugging

Tammana, Praveen Aravind Babu January 2018 (has links)
Software-defined networking (SDN) enables flexible network management, but as networks evolve towards large numbers of end-points with diverse network policies, higher speeds, and higher utilization, the abstraction of networks by SDN makes monitoring and debugging network problems increasingly hard and challenging. While some problems impact packet processing in the data plane (e.g., congestion), others cause policy deployment failures (e.g., hardware bugs); both create inconsistency between operator intent and actual network behavior. Existing debugging tools are not sufficient to accurately detect, localize, and understand the root cause of problems observed in large-scale networks; they either lack in-network resources (compute, memory, and/or network bandwidth) or take a long time to debug network problems. This thesis presents three debugging tools (PathDump, SwitchPointer, and Scout) and a technique for tracing packet trajectories called CherryPick. We call for a different approach to network monitoring and debugging: in contrast to implementing debugging functionality entirely in-network, we should carefully partition the debugging tasks between end-hosts and network elements. Towards this direction, we present CherryPick, PathDump, and SwitchPointer. The core of CherryPick is to cherry-pick the links that are key to representing an end-to-end path of a packet, and to embed the picked link IDs into its header on its way to the destination. PathDump is an end-host-based network debugger built on tracing packet trajectories; it exploits resources at the end-hosts to implement various monitoring and debugging functionalities. PathDump currently runs over a real network comprising only commodity hardware and yet can support a surprisingly large class of network debugging problems with minimal in-network functionality. The key contribution of SwitchPointer is to efficiently provide network visibility to end-host-based network debuggers like PathDump by using switch memory as a "directory service": each switch, rather than storing the telemetry data necessary for debugging functionalities, stores pointers to the end hosts where the relevant telemetry data is stored. The key design choice of treating switch memory as a directory service makes it possible to solve performance problems that were hard or infeasible with existing designs. Finally, we present and solve a network policy fault localization problem that arises in operating policy management frameworks for a production network. We develop Scout, a fully automated system that localizes faults in a large-scale policy deployment and further pinpoints the physical-level failures that are the most likely cause of the observed faults.
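As a toy illustration of CherryPick's core idea, a switch embeds a link ID only when that link is representative of the path (the selection rule below is a placeholder of our own; the paper derives which links suffice for specific topologies such as fat-trees):

```python
# Switches append only "cherry-picked" link IDs to the packet header,
# rather than recording every hop of the trajectory.
def forward(packet, switch_id, out_link, is_representative):
    if is_representative(switch_id, out_link):   # cherry-pick this link
        packet["picked_links"].append(out_link)
    return packet

packet = {"dst": "h42", "picked_links": []}
# Toy rule: only uplinks into the core layer are picked.
rep = lambda sw, link: link.startswith("core")
for sw, link in [("tor1", "agg3"), ("agg3", "core7"), ("core7", "agg9")]:
    packet = forward(packet, sw, link, rep)
print(packet["picked_links"])   # ['core7']: one ID stands in for the path
```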
59

Modélisation "end-to-end" pour une approche écosystémique des pêches dans le Nord courant de Humboldt / End-to-end modelling for an Ecosystem Approach to Fisheries in the Humboldt Current Ecosystem

Oliveros Ramos, David Ricardo 08 December 2014 (has links)
This work represents an original contribution to the methodology of ecosystem model development, as well as the first attempt at an end-to-end (E2E) model of the Northern Humboldt Current Ecosystem (NHCE). The main purpose of the developed model is to build a tool for ecosystem-based management and decision making, which is why the credibility of the model is essential; this credibility can be assessed through confrontation with data. Additionally, the NHCE exhibits high climatic and oceanographic variability at several scales, the major source of interannual variability being the interruption of the upwelling seasonality by the El Niño Southern Oscillation, which has direct effects on larval survival and fish recruitment success. Fishing activity can also be highly variable, depending on the abundance and accessibility of the main fishery resources. This context raises the two main methodological questions addressed in this thesis, through the development of an end-to-end model coupling the high-trophic-level model OSMOSE to the hydrodynamic and biogeochemical model ROMS-PISCES: i) how to calibrate ecosystem models using time series data, and ii) how to incorporate the impact of the interannual variability of the environment and fishing. First, this thesis highlights some issues related to the confrontation of complex ecosystem models with data and proposes a methodology for a sequential, multi-phase calibration of ecosystem models. We propose two criteria to classify the parameters of a model: model dependency and the time variability of the parameters. These criteria, along with the availability of approximate initial estimates, are used as decision rules to determine which parameters need to be estimated, and their order of precedence in the sequential calibration process. Additionally, a new evolutionary algorithm, designed for the calibration of stochastic models (e.g. individual-based models) and optimized for maximum likelihood estimation, was developed and applied to calibrate the OSMOSE model to time series data. The environmental variability is explicit in the model: the ROMS-PISCES model forces the OSMOSE model and drives potential bottom-up effects up the food web through plankton and fish trophic interactions, as well as through changes in the spatial distribution of fish. The latter effect is taken into account using presence/absence species distribution models, which are traditionally assessed through a confusion matrix and its associated statistical metrics. However, when considering the prediction of habitat over time, the variability in the spatial distribution of the habitat can be summarized and validated using the patterns emerging from the shape of the spatial distributions. We modeled the potential habitat of the main species of the Humboldt Current Ecosystem using several sources of information (fisheries, scientific surveys, and satellite monitoring of vessels) jointly with environmental data from remote sensing and in situ observations, from 1992 to 2008. The potential habitat was predicted over the study period at monthly resolution, and the model was validated using quantitative and qualitative information about the system, following a pattern-oriented approach. The final ROMS-PISCES-OSMOSE E2E ecosystem model for the NHCE was calibrated using our evolutionary algorithm and a likelihood approach to fit monthly time series of landings, abundance indices, and catch-at-length distributions from 1992 to 2008. To conclude, some applications of the model to fishery management are presented, and their limitations and perspectives are discussed.
60

Integrated Reliability and Availability Analysis of Networks With Software Failures and Hardware Failures

Hou, Wei 17 May 2003 (has links)
This dissertation explores efficient algorithms and engineering methodologies for analyzing the overall reliability and availability of networks, integrating software failures and hardware failures. Node failures, link failures, and software failures are considered concurrently and dynamically in networks with complex topologies. The MORIN (MOdeling Reliability for Integrated Networks) method is proposed and discussed as an approach for analyzing the reliability of integrated networks. A Simplified Availability Modeling Tool (SAMOT) is developed and introduced to evaluate and analyze the availability of networks consisting of software and hardware component systems with architectural redundancy. The dissertation reviews and discusses relevant research efforts in analyzing network reliability and availability, provides experimental results for the proposed MORIN methodology and the SAMOT application, and summarizes recommendations for future research in network reliability.
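For readers unfamiliar with the underlying algebra, integrated analyses of this kind build on standard availability arithmetic, sketched below (the figures and helper functions are our own illustrations, not the SAMOT tool itself):

```python
# Steady-state availability from MTBF/MTTR, composed in series
# (all parts needed) and in parallel (redundant parts).
def availability(mtbf_h, mttr_h):
    return mtbf_h / (mtbf_h + mttr_h)

def series(*parts):            # up only if every component is up
    out = 1.0
    for a in parts:
        out *= a
    return out

def parallel(*parts):          # down only if every component is down
    down = 1.0
    for a in parts:
        down *= (1.0 - a)
    return 1.0 - down

hw = availability(10_000, 4)      # hardware: MTBF 10,000 h, MTTR 4 h
sw = availability(2_000, 0.1)     # software: frequent but quick restarts
node = series(hw, sw)             # a node needs both hardware and software
cluster = parallel(node, node)    # 1+1 redundant nodes
print(f"node: {node:.6f}  cluster: {cluster:.9f}")
```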
