  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Time series analysis of natural systems with neural networks and

Weichert, Andreas 27 February 1998 (has links)
No description available.
2

A Resilience-Oriented and NFV-Supported Scheme for Failure Detection in Software-Defined Networking

Li, He 19 October 2018 (has links)
As a recently emerging network paradigm, Software-Defined Networking (SDN) has attracted considerable attention from both industry and academia. The most significant advantage of SDN is that the paradigm disassociates the control logic (i.e., the control plane) from the forwarding process (i.e., the data plane), which are usually integrated in traditional network devices. Thanks to this centralized control, SDN offers the flexibility of dispatching flow policies to simplify network management. However, the same property also makes the SDN environment vulnerable: the network can be paralyzed when the sole SDN controller malfunctions. Although several works have addressed the failure of a centralized controller by deploying multiple controllers, these approaches suffer from inefficient and unbalanced controller utilization, idle resources, and an inability to absorb sudden bursts of flows. Additionally, network operators often spend a great deal of effort discovering failed nodes in order to recover their networks, an effort that can be reduced by detecting failures before network deterioration occurs. Network traffic prediction can serve as a practical approach to evaluating the state of an OpenFlow-based switch and consequently detecting SDN node failures in advance. As far as prediction is concerned, most researchers investigate either statistical modeling approaches, such as the Seasonal Autoregressive Integrated Moving Average (SARIMA), or Artificial Neural Network (ANN) methods, such as the Long Short-Term Memory (LSTM) neural network. Nonetheless, few have studied models that merge these two mechanisms for multi-step prediction. This thesis proposes a novel system, built on the Network Function Virtualization (NFV) technique, to enhance the resilience of SDN networks.
A hybrid prediction model based on the combination of SARIMA and LSTM is introduced as part of the system's detection module, where potential node breakdowns can be readily determined so that smart prevention and fast recovery can be carried out without human intervention. The results show that the proposed scheme improves on the time complexity of previous work, with the new combined prediction model reaching up to 95% accuracy while shortening detection and recovery times.
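The two-stage structure of the hybrid predictor described above can be illustrated with lightweight stand-ins. This is a sketch, not the thesis's model: a seasonal-mean component takes the place of SARIMA, and a least-squares AR(1) fit on the residuals takes the place of the LSTM; the function name, season length and decay rule are all assumptions.

```python
import numpy as np

def hybrid_forecast(series, season=24, steps=3):
    """Two-stage hybrid predictor sketch: a seasonal component
    (stand-in for SARIMA) plus a learned residual correction
    (stand-in for the LSTM)."""
    x = np.asarray(series, dtype=float)
    # Stage 1: seasonal profile = mean of each phase of the cycle.
    n_cycles = len(x) // season
    profile = x[:n_cycles * season].reshape(n_cycles, season).mean(axis=0)
    seasonal = profile[np.arange(len(x)) % season]
    resid = x - seasonal
    # Stage 2: AR(1) coefficient on residuals by least squares.
    num = float(np.dot(resid[:-1], resid[1:]))
    den = float(np.dot(resid[:-1], resid[:-1])) + 1e-12
    a = num / den
    # Multi-step forecast: seasonal part + geometrically decayed residual.
    last_r = resid[-1]
    preds = []
    for h in range(1, steps + 1):
        last_r *= a
        preds.append(profile[(len(x) + h - 1) % season] + last_r)
    return np.array(preds)
```

On a purely periodic series the residual stage contributes nothing and the forecast simply continues the seasonal pattern, which makes the two roles easy to separate when debugging.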
3

Analyzing molecular network perturbations in human cancer: application to mutated genes and gene fusions involved in acute lymphoblastic leukemia

Hajingabo, Leon 30 January 2015 (has links)
The sequencing of the human genome and the emergence of new high-throughput genomic technologies have opened new avenues of investigation for the systematic analysis of human diseases. We can now attempt to understand diseases such as cancer from a more global perspective, by identifying cancer-causing genes and studying how their protein products operate within a network of molecular interactions. In this context, we collected the genes specifically linked to Acute Lymphoblastic Leukemia (ALL) and identified new interaction partners that connect key ALL-associated genes such as NOTCH1, FBW7, KRAS and PTPN11 within an interaction network. We also sought to predict the functional impact of genomic variations such as the gene fusions involved in ALL. Using as models three different chromosomal translocations frequently identified in B-cell ALL, ETV6-RUNX1 (TEL-AML1), BCR-ABL1 and E2A-PBX1 (TCF3-PBX1), we adapted an oncogene-prediction approach in order to predict molecular perturbations in ALL. We showed that the transcriptional circuits dependent on Myc and JunD are specifically deregulated by the TEL-AML1 and TCF3-PBX1 gene fusions, respectively. We also identified the NXF1-dependent mRNA transport mechanism as a direct target of the TCF3-PBX1 fusion protein. Through this approach combining interactomic data and gene expression analyses, we provide new insight into the molecular understanding of Acute Lymphoblastic Leukemia. / Doctorate in Sciences
4

Large scale platform : Instantiable models and algorithmic design of communication schemes

Uznanski, Przemyslaw 11 October 2013 (has links) (PDF)
The increasing popularity of Internet bandwidth-intensive applications prompts us to consider the following problem: how to compute efficient collective communication schemes on a large-scale platform? Designing a collective communication in the context of a large-scale distributed network is a difficult, multi-level problem. Many solutions have been extensively studied and proposed, but a new, comprehensive and systematic approach is required, one that combines network models with the algorithmic design of solutions. In this work we advocate the use of models that are able to capture real-life network behavior, but are also simple enough that a mathematical analysis of their properties and the design of optimal algorithms are achievable.

First, we consider the problem of measuring the available bandwidth of a given point-to-point connection. We discuss how to obtain reliable datasets of bandwidth measurements using the PlanetLab platform, and we provide our own datasets together with the distributed software used to obtain them. While those datasets are not a part of our model per se, they are necessary when evaluating the performance of various network algorithms. Such datasets are common for latency-related problems, but very rare when dealing with bandwidth-related ones.

Then, we advocate a model that tries to accurately capture the capabilities of a network, named the LastMile model. This model assumes that congestion happens essentially at the edges connecting machines to the wide Internet, and it leads naturally to a bandwidth prediction algorithm. Using the datasets described earlier, we show that this algorithm is able to predict the available bandwidth between two given nodes with an accuracy comparable to the best known network prediction algorithm (Distributed Matrix Factorization). While we were unable to improve upon the DMF algorithm for point-to-point prediction, we show that our algorithm has a clear advantage coming from its simplicity: it naturally extends to network predictions under a congestion scenario (multiple connections sharing the bandwidth of a single link). We are in fact able to show, using the PlanetLab datasets, that the LastMile prediction is better in such scenarios.

In the third chapter, we propose new algorithms for solving the large-scale broadcast problem. We assume that the network is modeled by the LastMile model, and we show that under this assumption we are able to provide algorithms with provable, strong approximation ratios. Taking advantage of the simplicity and elasticity of the model, we can even extend it so that it captures the idea of connectivity artifacts, in our case firewalls preventing some nodes from communicating directly with each other. In the extended case we are also able to provide approximation algorithms with provable performance. Chapters 1 to 3 form three successful steps of our program to develop from scratch a mathematical network communication model, validate it experimentally, and show that it can be applied to develop algorithms solving hard problems related to the design of communication schemes in networks.

In chapter 4 we show how, under different network cost models and using some simplifying assumptions on the structure of the network and the queries, one can design very efficient communication schemes using simple combinatorial techniques. This work is complementary to the previous chapters in the sense that, when designing communication schemes earlier, we assumed atomicity of connections, i.e. that we have no control over the routing of individual connections. In chapter 4 we show how to solve the problem of efficiently routing network requests, given that we know the topology of the network. This demonstrates the importance of instantiating the parameters and the structure of the network when designing efficient communication schemes.
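The LastMile assumption described in this abstract (congestion happens only on the access links connecting machines to the wide Internet) can be sketched as a tiny prediction routine. The function name and the even-sharing rule are assumptions for illustration, not the thesis's code: each access link is split evenly among the connections using it, and a connection's predicted bandwidth is the smaller of its two shares.

```python
def lastmile_predict(up, down, connections):
    """LastMile-style bandwidth prediction sketch.

    up/down: dicts mapping node -> uplink/downlink capacity.
    connections: list of (sender, receiver) pairs active simultaneously.
    Returns the predicted bandwidth for each connection."""
    preds = []
    for s, r in connections:
        # Count how many active connections share each access link.
        n_up = sum(1 for a, _ in connections if a == s)
        n_dn = sum(1 for _, b in connections if b == r)
        # Bottleneck is the smaller fair share of the two edges.
        preds.append(min(up[s] / n_up, down[r] / n_dn))
    return preds
```

With a single connection this reduces to `min(up[sender], down[receiver])`; with several connections leaving one sender, its uplink capacity is divided among them, which is exactly the congestion scenario the abstract says the model handles naturally.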
5

Algoritmické obchodování na burze s využitím dat z Twitteru / Algorithmic Trading Using Twitter Data

Kříž, Jakub January 2015 (has links)
This master's thesis describes the creation of a prediction system that forecasts future market movements from stock exchange data and the analysis of Twitter messages. Tweets from two different sources are analysed either with mood dictionaries or with recurrent neural networks. The results of this analysis, together with a technical analysis of the stock exchange data, feed a multilayer neural network that makes the prediction, and a trading strategy is created and tested on the prediction's output. The thesis describes the design and implementation of the prediction system. By incorporating tweet analysis, the system increased the revenue of some trading strategies by more than 25%; however, this improvement holds only for certain data and timeframes.
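The mood-dictionary analysis mentioned in this abstract can be sketched in a few lines. The thesis's actual lexicons and network models are not available, so the mini dictionary below is entirely made up for illustration; real systems use lexicons with thousands of scored terms.

```python
import re

# Hypothetical mini mood dictionary (assumption, not the thesis's lexicon).
MOOD = {"bull": 1.0, "up": 0.5, "profit": 1.0, "gain": 0.8,
        "bear": -1.0, "down": -0.5, "loss": -1.0, "crash": -1.5}

def tweet_mood(text):
    """Average dictionary score of the mood words in one tweet;
    returns 0.0 if no mood word is present."""
    words = re.findall(r"[a-z]+", text.lower())
    scores = [MOOD[w] for w in words if w in MOOD]
    return sum(scores) / len(scores) if scores else 0.0
```

Per-tweet scores like these would then be aggregated per time window and fed, alongside technical indicators, into the prediction network.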
6

FEED-FORWARD NEURAL NETWORK (FFNN) BASED OPTIMIZATION OF AIR HANDLING UNITS: A STATE-OF-THE-ART DATA-DRIVEN DEMAND-CONTROLLED VENTILATION STRATEGY

SAYEDMOHAMMADMA VAEZ MOMENI (9187742) 04 August 2020 (has links)
Heating, ventilation and air conditioning (HVAC) systems are the single largest consumer of energy in the commercial and residential sectors. Minimizing their energy consumption without compromising indoor air quality (IAQ) and thermal comfort would yield environmental and financial benefits. Currently, most buildings still utilize constant air volume (CAV) systems with on/off control to meet thermal loads. Such systems, which take no account of occupancy, may ventilate a zone excessively and waste energy. Previous studies showed that CO<sub>2</sub>-based demand-controlled ventilation (DCV) methods are the most widely used strategies for determining the optimal level of supply air volume. However, conventional CO<sub>2</sub> mass-balance models do not yield optimal estimation accuracy. In this study, a feed-forward neural network (FFNN) algorithm was proposed to estimate zone occupancy using CO<sub>2</sub> concentrations, observed occupancy data and the zone schedule. The occupancy prediction result was then used to optimize the supply fan operation of the air handling unit (AHU) serving the zone, with IAQ and thermal comfort standards taken as active constraints of the optimization. For validation, an experiment was carried out in an auditorium on a university campus. The results revealed that the neural network occupancy estimation model can reduce daily ventilation energy by 74.2% compared to the current on/off control.
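The occupancy estimator in this abstract is a feed-forward network trained on CO<sub>2</sub> and schedule data. The sketch below shows the structure only, on synthetic stand-in data (the real inputs, network size and training setup are not given in the abstract and are assumed here): one hidden ReLU layer trained by plain full-batch gradient descent to map a scaled CO<sub>2</sub> level to an occupant count.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data (assumption): x = scaled CO2 level,
# y = occupant count proportional to it.
x = rng.uniform(0.0, 1.0, size=(20, 1))
y = 2.0 * x

# One hidden layer of 8 ReLU units, linear scalar output.
W1 = rng.normal(scale=0.5, size=(1, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)

losses = []
lr = 0.05
for _ in range(500):
    h = np.maximum(0.0, x @ W1 + b1)   # forward pass, hidden layer
    pred = h @ W2 + b2                 # forward pass, output layer
    err = pred - y
    losses.append(float((err ** 2).mean()))
    d = 2.0 * err / len(x)             # backprop: dL/dpred
    gW2 = h.T @ d; gb2 = d.sum(axis=0)
    dh = (d @ W2.T) * (h > 0)          # ReLU gate
    gW1 = x.T @ dh; gb1 = dh.sum(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1     # gradient-descent update
    W2 -= lr * gW2; b2 -= lr * gb2
```

In the thesis's setting the trained estimate would then drive the AHU supply fan, subject to the IAQ and comfort constraints.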
7

DEEP SKETCH-BASED CHARACTER MODELING USING MULTIPLE CONVOLUTIONAL NEURAL NETWORKS

Aleena Kyenat Malik Aslam (14216159) 07 December 2022 (has links)
<p>3D character modeling is a crucial part of asset creation in the entertainment industry, particularly for animation and games. A fully automated pipeline via sketch-based 3D modeling (SBM) is an emerging possibility, but development is stalled by unrefined outputs and a lack of character-centered tools. This thesis proposes an improved method for constructing 3D character models with minimal user input, using only two sketch inputs: an unshaded front sketch and an unshaded side sketch. The system implements a deep convolutional neural network (CNN), a type of deep learning algorithm from artificial intelligence (AI), to process the input sketches and generate multi-view depth, normal and confidence maps that provide more information about the 3D surface. These are then fused into a 3D point cloud, a representation of objects in 3D space. The point cloud is converted into a 3D mesh via an occupancy network, involving another CNN, for a more precise 3D representation. This reconstruction step is competitive with non-deep-learning approaches such as Poisson reconstruction. The proposed system is evaluated for character generation on standardized quantitative metrics (i.e., Chamfer Distance [CD], Earth Mover's Distance [EMD], F-score and Intersection over Union [IoU]) and compared to the base framework trained on the same character sketch and model database. This implementation offers a significant improvement in the accuracy of vertex positions for the reconstructed character models.</p>
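Of the metrics listed in this abstract, the Chamfer Distance is the simplest to show concretely. The sketch below implements one common squared-distance variant (the thesis's exact normalization is not specified): the mean squared distance from each point to its nearest neighbour in the other cloud, summed over both directions.

```python
import numpy as np

def chamfer_distance(A, B):
    """Symmetric Chamfer distance between point clouds A (N,3) and B (M,3),
    using squared Euclidean nearest-neighbour distances."""
    # Pairwise squared distances, shape (N, M), via broadcasting.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    # Nearest neighbour in each direction, averaged, then summed.
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()
```

Identical clouds score 0; the brute-force O(N·M) pairwise matrix is fine for small meshes, while large reconstructions would use a k-d tree instead.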
8

INTERFACE, PHASE CHANGE AND MOLECULAR TRANSPORT IN SUB, TRANS AND SUPERCRITICAL REGIMES FOR N-ALKANE/NITROGEN MIXTURES

Suman Chakraborty (13184898) 01 August 2022 (has links)
<p>Understanding the behavior of liquid hydrocarbon propellants under high-pressure, high-temperature conditions is a crucial step towards improving the performance of modern combustion engines (liquid rocket engines, diesel engines, gas turbines and so on) and designing the next generation of them. Under such harsh thermodynamic conditions (high P and T), propellant droplets may pass through sub-, trans- and supercritical regimes. The focus of this research is to explore the dynamics of the vapor-liquid two-phase system formed by a liquid hydrocarbon fuel (n-heptane or n-dodecane) and an ambient gas (nitrogen) over a wide range of P and T leading up to the mixture critical point and beyond. Molecular dynamics (MD) has been used as the primary tool in this research, along with other tools such as phase stability calculations based on Gibbs' work, the Peng-Robinson equation of state, density gradient theory and neural networks.</p>
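The Peng-Robinson equation of state mentioned in this abstract has a standard closed form that is easy to sketch for a pure fluid. The critical constants below are approximate literature values for n-heptane (Tc ≈ 540.2 K, Pc ≈ 2.736 MPa, ω ≈ 0.349) and should be checked against a reference table before serious use.

```python
import math

R = 8.314462618  # universal gas constant, J/(mol K)

def peng_robinson_pressure(T, v, Tc, Pc, omega):
    """Peng-Robinson EOS: pressure (Pa) of a pure fluid at temperature
    T (K) and molar volume v (m^3/mol), from critical constants and
    the acentric factor omega."""
    a = 0.45724 * R**2 * Tc**2 / Pc          # attraction parameter
    b = 0.07780 * R * Tc / Pc                # covolume
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
    alpha = (1.0 + kappa * (1.0 - math.sqrt(T / Tc))) ** 2
    return R * T / (v - b) - a * alpha / (v * v + 2.0 * b * v - b * b)

# Approximate constants for n-heptane (assumed literature values).
TC_HEPTANE, PC_HEPTANE, OMEGA_HEPTANE = 540.2, 2.736e6, 0.349
```

A quick sanity check is the dilute limit: at a large molar volume the repulsive and attractive corrections are tiny, so the pressure should approach the ideal-gas value RT/v from slightly below.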
9

Large scale platform : Instantiable models and algorithmic design of communication schemes / Modélisation des communications sur plates-formes à grande echelles

Uznanski, Przemyslaw 11 October 2013 (has links)
The growing popularity of bandwidth-hungry Internet applications (P2P, streaming, ...) pushes us to consider the following problem: how can we build efficient collective communication systems on a large-scale platform? Developing collective communication schemes in the setting of a large-scale distributed network is a difficult task that has been widely studied and for which many solutions have been proposed. However, a new, global and systematic approach is needed, one that combines network models and algorithmic design. In this thesis we propose the use of models capable of capturing the behavior of a real network while remaining simple enough that their mathematical properties can be studied and optimal algorithms can be designed. First, we consider the problem of estimating the bandwidth available on a given point-to-point connection. We study how to obtain bandwidth datasets using the PlanetLab platform, and we also present our own datasets, obtained with bedibe, a piece of software we developed. These data are necessary to evaluate the performance of the various network algorithms. While many latency datasets exist, bandwidth datasets are very rare. We then present a model, called LastMile, that estimates the bandwidth. Using the datasets described above, we show that this algorithm can predict the bandwidth between two given nodes with an accuracy comparable to the best known prediction algorithm (DMF). Moreover, the LastMile model extends naturally to predictions in the congestion scenario (several connections sharing a single link).
We are indeed able to show, using the PlanetLab datasets, that the LastMile prediction is preferable in such scenarios. In the third chapter, we propose new algorithms for solving the broadcast problem. We assume that the network is modeled by the LastMile model, and we show that, under this assumption, we can provide algorithms with strong approximation ratios. We further extend the LastMile model so as to incorporate connectivity artifacts, in our case firewalls that prevent certain nodes from communicating directly with each other. In this last case we can also provide approximation algorithms with provable performance guarantees. Chapters 1 to 3 form the three completed steps of our program, which pursues three goals: first, to develop a communication network model from scratch; second, to validate its performance experimentally; third, to show that it can be used to develop algorithms that solve collective communication problems. In the fourth chapter, we show how efficient communication systems can be designed, under different cost models, using combinatorial techniques together with simplifying assumptions on the structure of the network and the requests. This work is complementary to the previous chapter: earlier we assumed that connections were atomic (i.e., we have no control over the routing of individual connections), whereas in chapter 4 we show how to solve the energy-efficient routing problem for a given fixed topology.
10

Complex Vehicle Modeling: A Data Driven Approach

Schoen, Alexander C. 12 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / This thesis proposes an artificial neural network (NN) model to predict fuel consumption in heavy vehicles. The model uses predictors derived from vehicle speed, mass, and road grade, variables readily available from the telematics devices that are becoming an integral part of connected vehicles. The model predictors are aggregated over a fixed distance traveled (i.e., a window) instead of a fixed time interval; 1 km windows were found to be most appropriate for the vocations studied in this thesis. Two vocations were studied: refuse and delivery trucks. The proposed NN model was compared to two traditional models: a parametric model similar to one found in the literature, and a linear regression model that uses the same features developed for the NN model. The confidence levels of the three models were calculated in order to evaluate their variances. It was found that the NN models produce lower point-wise error, but their stability is not as high as that of the regression models. To improve the variance of the NN models, an ensemble based on the average of five K-fold models was created, and the mean training error was used to correct the ensemble predictions. Finally, the confidence level of each model was analyzed to understand how much error to expect from it. The ensemble K-fold model's predictions are more reliable than those of the single NN, and its confidence interval is narrower than those of both the parametric and regression models.
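The ensemble step in this abstract (average the K fold-models, then correct by the mean training error) can be sketched in a few lines. The exact form of the correction used in the thesis is not given, so subtracting the mean training error from the averaged prediction is one plausible reading, and the fold models below are trivial stand-ins for trained networks.

```python
import statistics

def ensemble_predict(fold_models, x, train_errors):
    """Average the fold models' predictions and subtract the mean
    training error as a bias correction (assumed form)."""
    raw = [m(x) for m in fold_models]
    return statistics.mean(raw) - statistics.mean(train_errors)

# Hypothetical fold models: each maps input features to a
# fuel-consumption estimate, here with a known constant bias.
folds = [lambda x: x + 1.0, lambda x: x + 3.0]
corrected = ensemble_predict(folds, 10.0, train_errors=[2.0, 2.0])
```

With these stand-ins the two models are biased by +1 and +3, the ensemble mean is biased by +2, and subtracting the mean training error of 2 recovers the unbiased value, which is the point of the correction.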
