541 |
Modélisation de système synthétique pour la production de biohydrogène / Modeling of synthetic system for the production of biohydrogen. Fontaine, Nicolas, 28 September 2015
The anticipated depletion, over the coming decades, of the fossil resources that currently supply more than 70% of the fuel consumed worldwide in land, air and sea transport is driving the identification and development of new renewable energy sources. The production of biofuels from biomass is one of the most promising research avenues. While the first generation of biofuels (produced from sugar crops, cereals or oilseeds) is reaching its limits (competition with food uses, in particular), the second generation, produced from non-food carbon resources (lignocellulosic material, molasses, vinasse, etc.), could take over once the conversion processes are sufficiently mastered. In the longer term, a third generation could emerge, based on the exploitation of marine biomass (microalgae in particular), although many obstacles remain to be overcome: optimization of cultivation and harvesting processes, low-cost extraction, optimization of metabolic pathways, etc. It is worth noting that the French national strategy for research and innovation (SNRI) has identified four "key domains" for energy: nuclear power, photovoltaic solar power, second-generation biofuels and marine energies. These are complemented, in view of their potential contribution to the fight against climate change, by CO2 storage, energy conversion (including fuel cells) and hydrogen. The present research project explores ways of improving the efficiency of the biotransformation of non-food organic matter of industrial origin into second-generation biofuels. In particular, two complementary aspects are addressed: the optimization of microbial organisms and metabolic pathways to improve the biological yield of biofuel production, and the optimization of the processes for cultivating the microorganisms and extracting the biofuels. The thesis project consists in applying white biotechnology, synthetic biology and process engineering to the characterization of bacterial strains, their metabolic pathways and experimental prototypes for the production of biofuels, methane and hydrogen from by-products of the sugar industry of La Réunion, namely molasses and vinasse. This project would open up new prospects for the valorization of these industrial wastes and contribute, in the long run, to building a sustainable biofuel and hydrogen industry in Réunion. / Hydrogen is a candidate next-generation fuel, with a high energy density and environmentally friendly behavior in the energy production phase. Micro-organism-based biological production of hydrogen currently suffers from low yields because the living cells must sustain cellular activities other than hydrogen production in order to survive. To circumvent this, a team designed a synthetic cell-free system combining 13 different enzymes to synthesize hydrogen from cellobiose. This assembly has a better yield than microorganism-based systems.
We used methods based on differential equations to investigate how the initial conditions and the kinetic parameters of the enzymes influence the productivity of such a system and, through simulations, to identify the conditions that would optimize hydrogen production with cellobiose as substrate. Further, when the kinetic parameters of the component enzymes of such a system are not known, we showed how artificial neural networks can be used to identify alternative models that give an idea of the kinetics of hydrogen production. During our study of the cellobiose-based system, other cell-free assemblies were engineered to produce hydrogen from different raw materials. Interested in the reconstruction of synthetic systems, we therefore designed several tools to help automate the assembly and the modelling of these new synthetic networks. This work demonstrates how modeling can help in designing and characterizing cell-free systems in synthetic biology.
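As an illustration of the kind of differential-equation model referred to here, the sketch below simulates a strongly simplified two-reaction cell-free pathway with scipy. The rate laws, species names and parameter values are hypothetical placeholders, not the thesis's actual 13-enzyme model.

```python
# Minimal sketch: ODE simulation of a simplified cell-free pathway
# (hypothetical Michaelis-Menten rate laws, not the actual 13-enzyme system).
import numpy as np
from scipy.integrate import solve_ivp

def rates(t, y, vmax1, km1, vmax2, km2):
    substrate, intermediate, hydrogen = y
    r1 = vmax1 * substrate / (km1 + substrate)        # substrate -> intermediate
    r2 = vmax2 * intermediate / (km2 + intermediate)  # intermediate -> H2
    return [-r1, r1 - r2, r2]

# Initial conditions (mM) and kinetic parameters are illustrative only.
y0 = [10.0, 0.0, 0.0]
params = (1.2, 0.5, 0.8, 0.3)
sol = solve_ivp(rates, (0, 100), y0, args=params, dense_output=True)

t = np.linspace(0, 100, 200)
print("final H2 concentration:", sol.sol(t)[2, -1])
```

Sweeping the initial substrate concentration or the kinetic parameters in such a model is what allows the productivity of the assembly to be explored in simulation before any wet-lab optimization.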
|
542 |
Démonstration opto-électronique du concept de calculateur neuromorphique par Reservoir Computing / Demonstration of an optoelectronic concept of neuromorphic computer by Reservoir Computing. Martinenghi, Romain, 16 December 2013
Reservoir Computing (RC) is a brain-inspired paradigm that appeared recently, in the early 2000s. It is a neuromorphic computer usually decomposed into three parts, the most important of which, called the "reservoir", is very close to a recurrent neural network. It differs from other artificial neural networks notably in that the traditional learning and training phases are no longer applied to the whole neural network but only to the read-out of the reservoir, which simplifies operation and facilitates a physical realization. It is precisely in this context that the research work of this thesis was carried out, during which we built a first optoelectronic physical implementation of an RC system. Our approach to physical RC systems relies on nonlinear dynamics with multiple delays in order to reproduce the complex behavior of a reservoir. Using a purely temporal dynamical system to reproduce the spatio-temporal dimension of a traditional neural network requires a particular shaping of the input and output signals, called temporal multiplexing or masking. Three years were needed to study and build experimentally our physical demonstrators based on optoelectronic nonlinear delay dynamics with multiple delays, in wavelength and in intensity. The experimental validation of our RC systems was carried out using two standard computational tests: the NARMA10 test (a time-series prediction task) and spoken digit recognition (a data classification task), which allowed us to quantify the computational power of our RC systems and, for some configurations, to reach the state of the art. / Reservoir Computing (RC) is a currently emerging brain-inspired computational paradigm, which appeared in the early 2000s. It is similar to conventional recurrent neural network (RNN) computing concepts, exhibiting essentially three parts: (i) an input layer to inject the information into the computing system; (ii) a central computational layer called the Reservoir; (iii) and an output layer which extracts the computed result through a so-called Read-Out procedure, the latter being determined after a learning and training step. The main originality compared to RNN lies in this last part, which is the only one concerned by the training step, the input layer and the Reservoir being originally randomly determined and fixed. This specificity brings attractive features to RC compared to RNN, in terms of simplification, efficiency, rapidity, and feasibility of the learning, as well as in terms of dedicated hardware implementation of the RC scheme. This thesis is indeed concerned with one of the first hardware implementations of RC, moreover with an optoelectronic architecture. Our approach to physical RC implementation is based on the use of a special class of complex system for the Reservoir, a nonlinear delay dynamics involving multiple delayed feedback paths. The Reservoir thus appears as a spatio-temporal emulation of a purely temporal dynamics, the delay dynamics. Specific designs of the input and output layers are shown to be possible, e.g. through time-division multiplexing techniques, and amplitude modulation for the realization of an input mask to address the virtual nodes in the delay dynamics.
Two optoelectronic setups are explored, one involving a wavelength nonlinear dynamics with a tunable laser, and another one involving an intensity nonlinear dynamics with an integrated-optics Mach-Zehnder modulator. Experimental validation of the computational efficiency is performed through two standard benchmark tasks: the NARMA10 test (a prediction task), and a spoken digit recognition test (a classification task), the latter showing results very close to state-of-the-art performance, even compared with purely numerical simulation approaches.
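A minimal software analogue of the RC scheme described above can be sketched as follows: a fixed random reservoir, a trained linear read-out, and the NARMA10 benchmark as target. The reservoir size, spectral radius and regularization value are illustrative choices, not the thesis's optoelectronic parameters.

```python
# Minimal echo-state-network sketch on the NARMA10 benchmark
# (fixed random reservoir, ridge-regression read-out; parameters are illustrative).
import numpy as np

rng = np.random.default_rng(0)
T, n = 4000, 200                          # sequence length, reservoir size

# NARMA10 target driven by uniform input u in [0, 0.5]
u = rng.uniform(0, 0.5, T)
y = np.zeros(T)
for t in range(10, T - 1):
    y[t + 1] = (0.3 * y[t] + 0.05 * y[t] * y[t - 9:t + 1].sum()
                + 1.5 * u[t - 9] * u[t] + 0.1)

# Fixed random reservoir: input and recurrent weights are never trained.
W_in = rng.uniform(-0.5, 0.5, n)
W = rng.normal(0, 1, (n, n))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # scale spectral radius to 0.9

x = np.zeros(n)
states = np.zeros((T, n))
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t])
    states[t] = x

# Train only the linear read-out (ridge regression), as in RC.
washout, lam = 200, 1e-6
X, Y = states[washout:], y[washout:]
W_out = np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ Y)
nrmse = np.sqrt(np.mean((X @ W_out - Y) ** 2)) / np.std(Y)
print("NARMA10 training NRMSE:", nrmse)
```

In the hardware version, the reservoir update above is replaced by the physical delay dynamics and the state matrix is read out from the time-multiplexed virtual nodes, but the training remains the same simple linear regression.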
|
543 |
Artificial Neural Networks And Artificial Intelligence Paradigms In Damage Assessment Of Steel Railway Bridges. Barai, Sudhirkumar V, 04 1900
No description available.
|
544 |
How accuracy of estimated glottal flow waveforms affects spoofed speech detection performance. Deivard, Johannes, January 2020
In the domain of automatic speaker verification, one of the challenges is to keep malevolent users out of the system. One way to do this is to create algorithms that detect spoofed speech. There are several types of spoofed speech and several ways to detect them, one of which is to look at the glottal flow waveform (GFW) of a speech signal. This waveform is usually estimated using glottal inverse filtering (GIF), since creating the ground-truth GFW requires special invasive equipment. To the author's knowledge, no research has investigated the correlation between GFW accuracy and spoofed speech detection (SSD) performance. This thesis tries to find out whether that correlation exists. First, the performance of different GIF methods is evaluated; then simple SSD machine learning (ML) models are trained and evaluated based on their macro average precision. The ML models use different datasets composed of parametrized GFWs estimated with the GIF methods from the previous step. Results from the previous tasks are then combined in order to spot any correlations. The evaluation of the different methods showed that they produced GFWs of varying accuracy. The machine learning models also showed varying performance depending on which type of dataset was used. However, when combining the results, no obvious correlations between GFW accuracy and SSD performance were detected. This suggests that the overall accuracy of a GFW is not a substantial factor in the performance of machine learning-based SSD algorithms.
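As a hedged illustration of the evaluation step described above, the sketch below trains a simple classifier on feature vectors assumed to be derived from estimated GFWs and scores it with macro-averaged precision. The feature matrix and labels are synthetic stand-ins, not the thesis's actual parametrized GFW datasets.

```python
# Sketch: scoring a simple spoofed-speech detector by macro-averaged precision.
# X would hold GFW-derived feature vectors; here it is random placeholder data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 12))        # placeholder GFW feature vectors
y = rng.integers(0, 2, 1000)           # 0 = bona fide, 1 = spoofed (placeholder)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("macro average precision:",
      precision_score(y_te, clf.predict(X_te), average="macro"))
```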
|
545 |
Prognostic Health Management Systems for More Electric Aircraft Applications. Demus, Justin Cole, 09 September 2021
No description available.
|
546 |
Data-driven Uncertainty Analysis in Neural Networks with Applications to Manufacturing Process Monitoring. Bin Zhang, 12 August 2021
Artificial neural networks, including deep neural networks, play a central role in data-driven science due to their superior learning capacity and adaptability to different tasks and data structures. However, although quantitative uncertainty analysis is essential for training and deploying reliable data-driven models, the uncertainties in neural networks are often overlooked or underestimated in many studies, mainly due to the lack of a high-fidelity and computationally efficient uncertainty quantification approach. In this work, a novel uncertainty analysis scheme is developed. The Gaussian mixture model is used to characterize the probability distributions of uncertainties in arbitrary forms, which yields higher fidelity than presumed distribution forms, like Gaussian, when the underlying uncertainty is multimodal, and is more compact and efficient than large-scale Monte Carlo sampling. The fidelity of the Gaussian mixture is refined through adaptive scheduling of the width of each Gaussian component, based on active assessment of the factors that could deteriorate the uncertainty representation quality, such as the nonlinearity of activation functions in the neural network.

Following this idea, an adaptive Gaussian mixture scheme of nonlinear uncertainty propagation is proposed to effectively propagate the probability distributions of uncertainties through layers in deep neural networks or through time in recurrent neural networks. An adaptive Gaussian mixture filter (AGMF) is then designed based on this uncertainty propagation scheme. By approximating the dynamics of a highly nonlinear system with a feedforward neural network, the adaptive Gaussian mixture refinement is applied at both the state prediction and Bayesian update steps to closely track the distribution of unmeasurable states. As a result, this new AGMF exhibits state-of-the-art accuracy with a reasonable computational cost on highly nonlinear state estimation problems subject to high magnitudes of uncertainty. Next, a probabilistic neural network with Gaussian-mixture-distributed parameters (GM-PNN) is developed. The adaptive Gaussian mixture scheme is extended to refine intermediate layer states and ensure the fidelity of both linear and nonlinear transformations within the network, so that the predictive distribution of the output target can be inferred directly without sampling or approximation of integration. The derivatives of the loss function with respect to all the probabilistic parameters in this network are derived explicitly, and therefore the GM-PNN can be easily trained with any backpropagation method to address practical data-driven problems subject to uncertainties.

The GM-PNN is applied to two data-driven condition monitoring schemes for manufacturing processes. For tool wear monitoring in the turning process, a systematic feature normalization and selection scheme is proposed for the engineering of optimal feature sets extracted from sensor signals. The predictive tool wear models are established using two methods: a type-2 fuzzy network for interval-type uncertainty quantification, and the GM-PNN for probabilistic uncertainty quantification. For porosity monitoring in laser additive manufacturing processes, a convolutional neural network (CNN) is used to directly learn melt-pool patterns to predict porosity. Classical CNN models without consideration of uncertainty are compared with CNN models in which the GM-PNN is embedded as an uncertainty quantification module. For both monitoring schemes, experimental results show that the GM-PNN not only achieves higher prediction accuracies of process conditions than the classical models but also provides more effective uncertainty quantification to facilitate process-level decision-making in the manufacturing environment.

Based on the developed uncertainty analysis methods and their proven successes in practical applications, some directions for future studies are suggested. Closed-loop control systems may be synthesized by combining the AGMF with data-driven controller design. The AGMF can also be extended from a state estimator to parameter estimation problems in data-driven models. In addition, the GM-PNN scheme may be expanded to directly build more complicated models such as convolutional or recurrent neural networks.
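To illustrate the core idea of propagating a Gaussian mixture through a nonlinear layer, a hedged sketch is given below: each component is pushed through a tanh activation by first-order linearization around its mean, and a component is split in two when it is too wide for that approximation to be trusted. The splitting rule and all numbers are simplified assumptions, not the adaptive refinement criterion actually developed in the thesis.

```python
# Sketch: propagating a 1-D Gaussian mixture through tanh by per-component
# linearization, splitting a component when the linear approximation is poor.
# The split threshold and parameters are illustrative assumptions.
import numpy as np

def propagate_component(w, mu, var, split_threshold=0.5):
    """Push one Gaussian component N(mu, var) through tanh."""
    if np.sqrt(var) > split_threshold:
        # Split into two narrower components before linearizing (crude refinement).
        offset = np.sqrt(var) / 2
        out = []
        for half in [(w / 2, mu - offset, var / 4), (w / 2, mu + offset, var / 4)]:
            out.extend(propagate_component(*half, split_threshold))
        return out
    slope = 1.0 - np.tanh(mu) ** 2            # d tanh / dx evaluated at the mean
    return [(w, np.tanh(mu), (slope ** 2) * var)]

# Input mixture: (weight, mean, variance) triples, illustrative values.
mixture = [(0.6, -1.0, 0.9), (0.4, 0.5, 0.1)]
output = []
for comp in mixture:
    output.extend(propagate_component(*comp))
print("output mixture (weight, mean, variance):")
for comp in output:
    print(comp)
```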
|
547 |
Towards a Data-Driven Approach to Ground-Fault Location in Distribution Power System using Artificial Neural Network. Dupuis, Antoine, January 2021
Motivated by the need for less polluting energy production, the recent increase in renewable electricity production is reshaping classical power systems. Power flow, initially unidirectional and constant, becomes multi-directional and dynamic. As one of the many consequences, classical power system fault location methods might become outdated. To this extent, the development of new methods as well as the improvement of existing methods is of great interest. Additionally, robust and fast means of fault location strengthen power system reliability by improving recovery time. Since most faults occur at the distribution level, a study of the main fault location methods in distribution power systems is first conducted. Relevant information about their respective advantages and drawbacks highlights the need to improve classical fault location methods or to develop new ones. The main objective of the thesis is to develop a prototype data-driven ground-fault location method that aims to improve the robustness and accuracy of fault location in the power system, as well as to offer new solutions for fault location. An 11-bus 20 kV distribution power system with distributed generation is modeled to test the method. As required for data-driven methods, the dataset is provided through simulation, in which time-domain three-phase voltages at the system substation during faults are generated. This data is then processed using the dyadic discrete wavelet transform, a powerful signal processing method, to extract useful information from the signal, after which relevant features are computed from the wavelet coefficients. To predict the location of the fault, neural networks are trained to find potential correlations between the computed features and the distance of the fault from the substation. After testing and comparing different combinations of neural networks, results are analyzed, and eventually challenges and potential improvements for further development and application of the method are introduced.
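A hedged sketch of the processing pipeline just described follows: a dyadic discrete wavelet transform of simulated three-phase voltages, simple energy features per decomposition level, and a small neural network regressor for the fault distance. The wavelet family, feature definition and network size are assumptions made for illustration; the thesis's actual 11-bus simulation data and feature set are not reproduced.

```python
# Sketch: DWT energy features from three-phase voltages -> fault-distance regressor.
# Signals, wavelet choice ('db4'), features and network size are illustrative.
import numpy as np
import pywt
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def dwt_energy_features(v_abc, wavelet="db4", level=4):
    """Per-phase, per-level energy of dyadic DWT detail coefficients."""
    feats = []
    for phase in v_abc:                                   # v_abc: (3, n_samples)
        coeffs = pywt.wavedec(phase, wavelet, level=level)
        feats.extend(np.sum(c ** 2) for c in coeffs[1:])  # detail levels only
    return np.array(feats)

# Placeholder "simulated" dataset: random signals with fake distance labels.
n_cases = 300
X = np.vstack([dwt_energy_features(rng.normal(size=(3, 2048)))
               for _ in range(n_cases)])
distance_km = rng.uniform(0.5, 15.0, n_cases)             # placeholder fault distances

model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
model.fit(X, distance_km)
print("predicted distance for first case (km):", model.predict(X[:1])[0])
```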
|
548 |
Návrh algoritmů pro neuronové sítě řídicí síťový prvek / Design of algorithms for neural networks controlling a network element. Stískal, Břetislav, January 2008
This diploma thesis is divided into a theoretical and a practical part. The theoretical part contains basic information about the history and development of Artificial Neural Networks (ANN) from the last century to the present. The theory is then examined in the practical part, for example by learning and training several topologies of artificial neural networks on specific tasks, simulating these networks and describing the results. The aim of the thesis is the simulation of an active network element controlled by artificial neural networks, i.e. learning, training and simulation of the designed neural network. This part contains an algorithm for switching ports by address with a Hopfield network, which uses the solution of the classical Travelling Salesman Problem (TSP); a sketch of this idea is given below. A further point is to outline the optimization problems involved and their solutions. The Hopfield topology is compared with recurrent topologies of neural networks (Elman and Layer-Recurrent topologies): their main differences, their advantages and disadvantages, and their prospective use for optimization in the control of a network switch. Based on this experience, a solution with a control function of ANN in active network elements is proposed for the future.
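The Hopfield-based switching idea mentioned above, in which a port assignment is found by minimizing a network energy that penalizes conflicting assignments, can be sketched as follows. The energy coefficients, network size and update rule are illustrative simplifications, not the algorithm developed in the thesis.

```python
# Sketch: Hopfield-style network assigning input ports to output ports.
# The energy penalizes rows/columns without exactly one active neuron and adds a
# cost term; the weights and cost matrix are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n = 4                                     # number of input/output ports
cost = rng.uniform(0, 1, (n, n))          # placeholder switching-cost matrix
A, B, C = 2.0, 2.0, 1.0                   # constraint and cost weights (illustrative)

def energy(v):
    """Penalize rows/columns that do not contain exactly one active neuron,
    plus a term preferring low-cost assignments."""
    row_pen = ((v.sum(axis=1) - 1) ** 2).sum()
    col_pen = ((v.sum(axis=0) - 1) ** 2).sum()
    return A * row_pen + B * col_pen + C * (cost * v).sum()

v = rng.integers(0, 2, (n, n)).astype(float)   # v[i, j] = 1 means input i -> output j

# Asynchronous updates: flip each neuron to whichever state lowers the energy,
# until the configuration is stable (energy decreases monotonically).
changed = True
while changed:
    changed = False
    for i in range(n):
        for j in range(n):
            old = v[i, j]
            v[i, j] = 0.0
            e0 = energy(v)
            v[i, j] = 1.0
            e1 = energy(v)
            v[i, j] = 1.0 if e1 < e0 else 0.0
            changed = changed or (v[i, j] != old)

print("assignment matrix (rows: input ports, columns: output ports):")
print(v.astype(int))
```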
|
549 |
Síťový prvek s pokročilým řízením / Network Element with Advanced Control. Zedníček, Petr, January 2010
The diploma thesis deals with finding and testing neural networks whose characteristics and parameters are suitable for the active management of a network element. It solves the optimization task of priority switching of data units from inputs to outputs. The work focuses largely on the use of Hopfield and Kohonen networks and their optimization. The result of this work is two models. The first, theoretical, model is solved in Matlab, where the theoretical results of the individual neural networks are compared. The second model is a realistic model of the active element designed in Simulink.
|
550 |
Rozpoznávání a klasifikace emocí na základě analýzy řeči / Emotional State Recognition and Classification Based on Speech Signal Analysis. Černý, Lukáš, January 2010
The diploma thesis focuses on the classification of emotions. It deals with the parameterization of sound files by suprasegmental and segmental methods, with regard to the subsequent use of these methods. The Berlin database, which contains many sound recordings with emotions, is used. Parameterization creates files which are divided into two parts: the first part is used for training and the second part for testing. The classifier of interest is a self-organizing network. The thesis includes a Matlab program which can be used for the parameterization of any database. After parameterization, the data are classified by the self-organizing network. The resulting hit rates are presented at the end of the thesis.
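A hedged sketch of the classification stage described above is given below: a small self-organizing map is trained on feature vectors (standing in for the suprasegmental and segmental parameters), its nodes are labelled by majority vote on the training part, and the hit rate is computed on the test part. The map size, learning schedule and synthetic features are assumptions, not the thesis's Matlab implementation or the real Berlin-database parameters.

```python
# Sketch: self-organizing map (SOM) used as an emotion classifier.
# Feature vectors and labels are synthetic placeholders; the map size and
# learning schedule are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, n_feat, n_classes = 400, 100, 20, 4
X_train = rng.normal(size=(n_train, n_feat))
y_train = rng.integers(0, n_classes, n_train)
X_test = rng.normal(size=(n_test, n_feat))
y_test = rng.integers(0, n_classes, n_test)

rows, cols, iters = 6, 6, 3000
weights = rng.normal(size=(rows, cols, n_feat))
grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)

def bmu(x):
    """Grid index of the best-matching unit for a feature vector x."""
    d = np.linalg.norm(weights - x, axis=-1)
    return np.unravel_index(np.argmin(d), d.shape)

# Standard SOM training with a shrinking neighbourhood and learning rate.
for t in range(iters):
    x = X_train[rng.integers(n_train)]
    lr = 0.5 * (1 - t / iters)
    sigma = 3.0 * (1 - t / iters) + 0.5
    b = np.array(bmu(x))
    dist2 = ((grid - b) ** 2).sum(axis=-1)
    h = np.exp(-dist2 / (2 * sigma ** 2))[..., None]
    weights += lr * h * (x - weights)

# Label each map node by majority vote of the training samples it wins.
votes = np.zeros((rows, cols, n_classes))
for x, y in zip(X_train, y_train):
    votes[bmu(x)][y] += 1
node_label = votes.argmax(axis=-1)

hits = sum(node_label[bmu(x)] == y for x, y in zip(X_test, y_test))
print("test hit rate:", hits / n_test)
```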
|