21

Accuracy Improvement of Predictive Neural Networks for Managing Energy in Solar Powered Wireless Sensor Nodes

Al_Omary, Murad 20 December 2019
A wireless sensor network (WSN) measures environmental or physical parameters and makes them available to decision makers, with the possibility of remote monitoring. The sensor nodes that compose these networks are normally powered by batteries, which are increasingly impractical as a fixed, standalone power source because of costly replacement and maintenance. Ambient energy harvesting can be used with these nodes to support the batteries and prolong the lifetime of the networks. Owing to the high power density of solar energy in comparison with other environmental energies, solar cells are the most widely used harvesters; however, the fluctuating and intermittent nature of solar energy poses a real challenge to achieving a functional and reliable sensor node. To operate a sensor node effectively, its energy consumption should be well managed. One interesting approach for this purpose is to control the node's future activities according to the energy expected to be available, which requires a prior prediction of the harvestable solar energy for the upcoming operation periods, including the sun-free times. Several prediction algorithms have been created using stochastic and statistical principles as well as artificial intelligence (AI) methods, but they leave a considerable prediction error of 5-70%, which affects the reliable operation of the nodes. For example, the stochastic methods use discrete energy states that mostly do not fit the actual readings; the statistical methods apply weighting factors to previously registered readings and are therefore suitable only for predicting energy profiles under consistent weather conditions; and AI methods require large numbers of observations in the training process, which increases the memory space needed. Accordingly, the prediction accuracy of these algorithms is not sufficient. In this thesis, a prediction algorithm based on a neural network is developed and implemented on a microcontroller to manage the energy consumption of solar-cell-driven sensor nodes. The neural network uses a combination of meteorological and statistical input parameters, so as to meet the design criteria required for sensor nodes and to achieve an accuracy exceeding that of the traditional algorithms mentioned above. The prediction accuracy, represented by the correlation coefficient, is 0.992 for the developed neural network, whereas the most accurate traditional network reaches only 0.963. The developed neural network has been embedded in a sensor node prototype to adjust the operating states or modes over a simulation period of one week, during which the sensor node worked 6 hours more in normal operation mode. This helped to use the available energy approximately 3.6% more effectively than the most accurate traditional network, yielding a longer lifetime and a more reliable sensor node.
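A minimal sketch of the kind of predictor the abstract describes: a small neural network mapping mixed meteorological and statistical inputs to the next period's harvestable energy. The features (hour of day, cloud cover, previous harvest), the synthetic data, and the use of scikit-learn are assumptions for illustration, not the thesis's actual design.

```python
# Sketch: predict next-period harvestable solar energy from a mix of
# meteorological and statistical features, as the abstract describes.
# Feature choice and data are illustrative assumptions, not the thesis's.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
hours = rng.uniform(0, 24, 500)        # hour of day
cloud = rng.uniform(0, 1, 500)         # cloud-cover fraction (meteorological input)
prev = rng.uniform(0, 50, 500)         # previous-period harvest in J (statistical input)
# Synthetic target: a day/night solar profile attenuated by clouds.
energy = np.maximum(0, np.sin((hours - 6) / 12 * np.pi)) * 50 * (1 - 0.8 * cloud) \
         + 0.1 * prev + rng.normal(0, 1, 500)

X = np.column_stack([hours, cloud, prev])
net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0).fit(X, energy)

# Correlation coefficient between predictions and observations: the
# accuracy measure quoted in the abstract (0.992 for the developed net).
r = np.corrcoef(net.predict(X), energy)[0, 1]
print(f"correlation coefficient: {r:.3f}")
```

On a real node the network would be trained offline and only the learned weights deployed to the microcontroller, since a forward pass is cheap compared with training.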
22

AI inom radiologi, nuläge och framtid / AI in radiology, now and the future

Täreby, Linus, Bertilsson, William January 2023
This essay presents the results of a qualitative study aimed at gaining a deeper understanding of the use of artificial intelligence (AI) in radiology, its potential impact on the profession, and how it is used today. Through three interviews with individuals working in radiology, data collection focused on identifying the positive and negative aspects of AI in radiology, as well as its potential consequences for the profession. The results show a general acceptance of AI in radiology and of its ability to improve diagnostic processes and streamline work. At the same time, there is some concern that AI may replace humans and reduce the need for human judgment. The essay provides a basic understanding of how AI is used in radiology and of its possible future consequences.
23

Impact du stress hydrique sur les émissions d'isoprène de Quercus pubescens Willd / Water stress impact on isoprene emission from Quercus pubescens Willd.

Genard-Zielinski, Anne-Cyrielle 23 June 2014
Biogenic volatile organic compounds (BVOCs) are molecules produced by the secondary metabolism of plants, whose emission can be modulated by environmental conditions. Among these compounds, isoprene has been studied intensively because of its large emission fluxes and its involvement in tropospheric photochemistry. However, the mechanisms by which environmental factors act are still poorly known, in particular the impact of water stress; in a context of climate change, this type of stress will particularly affect the Mediterranean region. This work studied the impact of water stress on isoprene emissions from Quercus pubescens Willd., a species widespread in this region and thought to be the second largest isoprene source in Europe. Two studies were carried out. The first, conducted in a nursery, applied moderate and severe water stress from April to October: isoprene emissions increased for the moderately stressed trees, whereas no change in emissions was observed for the severely stressed trees. The second was a season-long study of water stress in a downy oak forest, where an amplified water stress was applied by a rain exclusion system that reduced rainfall by 30%; the amplified stress was observed to increase the trees' isoprene emission factors. The resulting database was used to develop an isoprene emission algorithm with an artificial neural network (ANN), which highlighted the predominant impact of soil water content on isoprene emissions.
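As a hedged illustration of the final modelling step, the sketch below fits a small ANN to assumed emission drivers and ranks the inputs by permutation importance, one simple way to surface the dominant role of soil water content reported above. The features, data, and importance measure are invented stand-ins for the study's database and algorithm.

```python
# Sketch: ANN emission algorithm plus a permutation check of which
# input dominates. Inputs (soil water content, temperature, light) are
# assumed drivers; the study's real database is not reproduced here.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
swc = rng.uniform(0.05, 0.40, 400)    # soil water content (m3/m3)
temp = rng.uniform(15, 35, 400)       # air temperature (degC)
par = rng.uniform(0, 2000, 400)       # photosynthetic light (umol/m2/s)
# Synthetic emission, deliberately dominated by soil water content.
emission = 20 * swc + 0.1 * temp + 0.001 * par + rng.normal(0, 0.2, 400)

X = np.column_stack([swc, temp, par])
net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=1).fit(X, emission)
imp = permutation_importance(net, X, emission, n_repeats=10, random_state=1)
for name, score in zip(["soil water content", "temperature", "light"], imp.importances_mean):
    print(f"{name}: {score:.3f}")
```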
24

A Neural Network Approach To Rotorcraft Parameter Estimation

Kumar, Rajan 04 1900
The present work focuses on the system identification approach to aerodynamic parameter estimation, which is used to calculate the stability and control derivatives required for aircraft flight mechanics. A new rotorcraft parameter estimation technique is proposed which uses a type of artificial neural network (ANN) called the radial basis function network (RBFN). Rotorcraft parameter estimation using ANNs is a largely unexplored research topic; earlier works in this area have used the output error, equation error and filter error methods, which are conventional parameter estimation methods. However, the conventional methods require an accurate non-linear rotorcraft simulation model, which the ANN-based method does not. The application of the RBFN overcomes the drawbacks of the multilayer perceptron (MLP) based delta method of parameter estimation and gives satisfactory results at either end of the ordered set of estimates. This makes the RBFN-based delta method suitable for rotorcraft studies, as both transition and high-speed flight regime characteristics can be studied. The RBFN-based delta method is used for computation of aerodynamic parameters from both simulated and real flight data. The simulated data is generated from an 8-DoF non-linear simulation model based on the Level-1 criteria of rotorcraft simulation modeling, and is used for computation of the quasi-steady and time-variant stability and control parameters for different flight conditions. The performance of the RBFN-based delta method is also analyzed in the presence of state and measurement noise as well as outliers. The established methodology is then applied to compute parameters directly from real flight test data for a BO 105 S123 helicopter obtained from DLR (German Aerospace Center). The parameters identified using the RBFN-based delta method are compared with values for the BO 105 helicopter identified in the published literature using conventional techniques with 6-DoF and 9-DoF rotorcraft simulation models. Finally, the estimated parameters are verified against flight data generated by a frequency-sweep pilot control input to assess the predictive capability of the RBFN-based delta method. Since the approach computes the parameters directly from flight data, it can be used for a reliable description of the higher frequency range, which is needed for high-bandwidth flight control and in-flight simulation.
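The following sketch illustrates one common reading of the delta method with an RBFN, under invented data: fit Gaussian radial basis functions with a linear output layer to records mapping states and controls to a moment coefficient, then recover a stability derivative as the network's local slope with respect to one input. The variable names, basis width, and data are assumptions, not the BO 105 records.

```python
# Sketch of an RBFN-based delta method: fit an RBF network to data
# mapping angle of attack and a control input to a moment coefficient,
# then estimate a stability derivative by perturbing one input.
import numpy as np

rng = np.random.default_rng(2)
alpha = rng.uniform(-0.2, 0.2, 300)       # angle of attack (rad)
delta_c = rng.uniform(-0.1, 0.1, 300)     # control deflection (rad)
Cm = -0.8 * alpha + 1.5 * delta_c + rng.normal(0, 0.01, 300)  # "measured" coefficient

X = np.column_stack([alpha, delta_c])
centers = X[rng.choice(len(X), 40, replace=False)]            # RBF centres
width = 0.1

def rbf(Z):
    d2 = ((Z[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

W, *_ = np.linalg.lstsq(rbf(X), Cm, rcond=None)               # linear output layer

# Delta step: perturb one input, hold the rest, read the slope.
x0 = np.array([[0.0, 0.0]])
eps = 1e-3
Cm_alpha = (rbf(x0 + [eps, 0]) @ W - rbf(x0 - [eps, 0]) @ W) / (2 * eps)
print(f"estimated Cm_alpha: {Cm_alpha[0]:.3f}  (true value in this synthetic data: -0.8)")
```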
25

Flutter Susceptibility Assessment of Airplanes in Sub-critical Regime using Ameliorated Flutter Margin and Neural Network Based Methods

Kumar, Brijesh January 2014
As flight flutter testing on an airplane progresses to high dynamic pressures and the high Mach number region, it becomes very difficult for engineers to predict the level of stability remaining in a flutter-prone mode or mechanism when the response data is infested with uncertainty. Uncertainty, and the ensuing scatter in modal data trends, always diminishes confidence amidst the possibility of a sudden decrease in the modal damping of a flutter-prone mode. Since the safety of the instrumented prototype and the crew cannot be compromised, a large number of test-points are planned, which eventually results in increased development time and associated costs. There has been a constant demand from the flight test community to improve understanding of the conventional methods and to develop new methods that could enable ground-station engineers to make better decisions with regard to flutter susceptibility of structural components on the airframe. An extensive literature survey has been done over many years to take due cognizance of the ground realities, historical developments, and the state of the art. In addition, a discussion of the results of a survey of occurrences of flutter among general aviation airplanes is provided at the very outset. The data for this research comprises results of Computational Aeroelasticity Analysis (CAA) and limited Flight Flutter Tests (FFTs) on two slightly different structural designs of the airframe of a supersonic fixed-wing airplane. Detailed discussion is provided with regard to the nature of the data, the certification requirements for an airplane to be flutter-free in the flight envelope, and the adopted process of flight flutter testing. Four flutter-prone modes, with two modes forming a symmetric bending-pitching flutter mechanism and the other two forming an anti-symmetric bending-pitching mechanism, have been identified based on the analysis of computational data. CAA and FFT raw data of these low-frequency flutter modes are provided, followed by discussion of their quality and of the flutter susceptibility of the critical mechanisms. Certain flight conditions, on a constant-altitude line and on constant-Mach-number lines, have been chosen on the basis of the availability of FFT data near the same flight conditions. Modal damping is often a highly non-linear function of airspeed, and scatter in such trends of modal damping can be very misleading. The flutter margin (FM) parameter, a measure of the remaining stability in a binary flutter mechanism, exhibits smooth and gradual variation with dynamic pressure. First, this thesis brings out the established knowledge of the flutter margin method and marks the continuing knowledge gaps, especially about the applicable form of the flutter margin prediction equation in the transonic region. Further theoretical developments revealed that the coefficients of this equation are flight-condition dependent to a large extent, and that the equation should only be used in small 'windows' of the flight envelope, making real-time flutter susceptibility assessment 'progressive' in nature. Firstly, it is brought out that lift curve slope should not be treated as a constant while using the prediction equation at constant altitudes on an airplane capable of transonic flight. Secondly, it was realized that the effect of the shift in aerodynamic centre must be considered, as it causes a 'transonic hump'.
Since the quadratic form of the flutter margin prediction equation, developed 47 years ago, does not provide a valid explanation in that region, a general equation has been derived. Furthermore, flight test data from only the supersonic region must be used for making acceptable predictions in the supersonic region. The 'ameliorated' flutter margin prediction equation, too, gives poor predictions in the transonic region. This has been attributed to the non-validity of the quasi-steady approximation of aerodynamic loads and to other additional non-linear effects. Although the equation with the effect of changing lift curve slope provides inconsistent predictions inside and near the region of the transonic hump, the errors have been acceptable in most cases. No consistent agreement was found with earlier reports that the FM trend is mostly parabolic in the subsonic region and linear in the supersonic region. It was also found that large scatter in the modal frequencies of the constituent modes can lead to scatter in flutter margin values, which can render the flutter margin method as ineffective as polynomial fitting of modal damping ratios. If the modal parameters at a repeated test-point exhibit a Gaussian spread, the distribution in FM is non-Gaussian but close to gamma-type. Fifteen uncertainty factors that cause scatter in modal data during FFT, and factors that cause modelling error in a computational model, have been enumerated. Since scatter in modal data is ineluctable, it was realized that a new predictive tool is needed in which the probable uncertainty can be incorporated proactively. Given the recent shortcomings of NASA's flutter meter, a neural network based approach was recognized as the most suitable one. MLP neural networks have been used successfully in such scenarios for function approximation through input-output mapping, provided the domains of the two remain finite. A neural network requires ample data for good learning and some relevant testing data for the evaluation of its performance. It was established that additional data can be generated by perturbing the modal mass matrix in the computational model within a symmetric bound. Since FFT is essentially an experimental process, it was realized that such a bound should be obtained from experimental data only, as the full effects of the uncertainty factors manifest only during flight tests. The 'validation FFT program', a flight test procedure for establishing such a bound from repeated tests at five diverse test-points in the safe region, has been devised after careful evaluation of guidelines and international practice. A simple statistical methodology has been devised to calculate the bound of uncertainty when modal parameters from repeated tests show a Gaussian distribution. Since no repeated tests were conducted on the applicable airframe, a hypothetical example with compatible data was considered to explain the procedure. Some key assumptions have been made and discussion regarding their plausibility has been provided. Since no updated computational model was made available, the next best option of causing random variation in the nominal values of CAA data was exercised to generate additional data for arriving at the final form of the neural network architecture and making predictions of damping ratios and FM values. The problem of progressive flutter susceptibility assessment was formulated such that the CAA data from four previous test-points were considered as input vectors and the CAA data from the next test-point was the corresponding output.
General heuristics for optimal learning performance have been developed. Although obtaining an optimal set of network parameters has been relatively easy, there was no single set of network parameters that would lead to consistently good predictions; therefore some fine-tuning of network parameters about the optimal set was often needed to achieve good generalization. It was found that data from the four already-flown test-points tend to dominate network prediction, and the availability of flight-test data from these previous test-points within the bound about nominal is absolutely important for good predictions. The performance improves when all five test-points are closer together. If the above requirements are met, the predictive performance of the neural network is much more consistent in flutter margin values than in modal damping ratios. A new algorithm for training the MLP network, called Particle Swarm Optimization (PSO), has also been tested. It was found that the gradient descent based algorithm is much more suitable than PSO in terms of training time, predictive performance, and real-time applicability. In summary, the main intellectual contributions of this thesis are as follows:
• Realization of the fact that secondary causes lead to more incidences of flutter on airplanes than primary causes.
• Completion of the theoretical understanding of the data-based flutter margin method and the flutter margin prediction equation for all ranges of flight Mach number, including the transonic region.
• Vindication of the fact that including lift curve slope in the flutter margin prediction equation leads to improved predictions of flutter margins in the subsonic and supersonic regions, and that progressive flutter susceptibility assessment is the best way of reaping the benefits of data-based methods.
• Explanation of a plausible recommended process for evaluation of uncertainty in modal damping and the flutter margin parameter.
• Realization of the fact that an MLP neural network, which treats a flutter mechanism as a stochastic non-linear system, is indeed a promising approach for real-time flutter susceptibility assessment.
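As a concrete illustration of the data-based method discussed above, the sketch below fits the classical quadratic flutter margin prediction equation, F(q) = B2 q^2 + B1 q + B0, to flutter margin values tracked against dynamic pressure at already-flown test-points, and extrapolates to F = 0 to estimate the flutter boundary. The FM values are invented; the thesis's caution is precisely that the coefficients are flight-condition dependent and must be refit within small windows of the envelope.

```python
# Sketch: quadratic flutter margin prediction equation fitted to
# (invented) flutter margin values at already-flown test-points, then
# extrapolated to F = 0 to estimate the flutter dynamic pressure.
import numpy as np

q = np.array([10.0, 14.0, 18.0, 22.0, 26.0])   # dynamic pressure (kPa), flown points
F = np.array([9.1, 7.6, 5.8, 3.9, 1.8])        # flutter margin from modal data

B2, B1, B0 = np.polyfit(q, F, 2)               # F(q) = B2 q^2 + B1 q + B0
roots = np.roots([B2, B1, B0])
# The flutter boundary is the first real root beyond the last flown point.
q_flutter = min(r.real for r in roots if r.real > q[-1] and abs(r.imag) < 1e-9)
print(f"predicted flutter onset near q = {q_flutter:.1f} kPa")
```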
26

Metamodel-Based Multidisciplinary Design Optimization of Automotive Structures

Ryberg, Ann-Britt January 2017
Multidisciplinary design optimization (MDO) can be used in computer aided engineering (CAE) to efficiently improve and balance the performance of automotive structures. However, large-scale MDO is not yet generally integrated within automotive product development due to several challenges, of which excessive computing time is the most important one. In this thesis, a metamodel-based MDO process that fits normal company organizations and CAE-based development processes is presented. The introduction of global metamodels offers a means to increase computational efficiency and distribute work without implementing complicated multi-level MDO methods. The presented MDO process is proven to be efficient for thickness optimization studies with the objective to minimize mass. It can also be used for spot weld optimization if the models are prepared correctly. A comparison of different methods reveals that topology optimization, which requires less model preparation and computational effort, is an alternative if load cases involving simulations of linear systems are judged to be of major importance. A technical challenge when performing metamodel-based design optimization is the lack of accuracy of metamodels representing complex responses with discontinuities, which are common in, for example, crashworthiness applications. The decision boundary from a support vector machine (SVM) can be used to identify the border between different types of deformation behaviour. In this thesis, this information is used to improve the accuracy of feedforward neural network metamodels. Three different approaches are tested, the first of which is sketched below: splitting the design space and fitting separate metamodels for the different regions, adding estimated guiding samples to the fitting set along the boundary before a global metamodel is fitted, and using a special SVM-based sequential sampling method. Substantial improvements in accuracy are observed, and it is found that implementing SVM-based sequential sampling and estimated guiding samples can result in successful optimization studies for cases where more conventional methods fail.
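A hedged sketch of the first approach: an SVM learns the border between two deformation behaviours, and a separate feedforward metamodel is fitted on each side of it. The two-variable design space, the discontinuous response, and the scikit-learn models are invented stand-ins for actual crash simulation results.

```python
# Sketch: SVM-split metamodelling. An SVC learns the border between two
# deformation behaviours; a separate MLP surrogate is fitted per region.
# The discontinuous response is an invented stand-in for a crash output.
import numpy as np
from sklearn.svm import SVC
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
X = rng.uniform(0, 1, (400, 2))                           # two thickness variables
regime = (X[:, 0] + X[:, 1] > 1.0).astype(int)            # deformation mode label
y = np.where(regime == 0, 2 * X[:, 0], 5 - 3 * X[:, 1])   # response with a jump

boundary = SVC(kernel="rbf").fit(X, regime)               # learn the border
models = [MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                       random_state=3).fit(X[regime == k], y[regime == k])
          for k in (0, 1)]

def predict(x):
    k = boundary.predict(x)                               # pick the region first
    return np.where(k == 0, models[0].predict(x), models[1].predict(x))

print(predict(np.array([[0.2, 0.3], [0.8, 0.9]])))
```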
27

Artificial Neural Networks And Artificial Intelligence Paradigms In Damage Assessment Of Steel Railway Bridges

Barai, Sudhirkumar V 04 1900
No description available.
28

Development of deterioration diagnostic methods for secondary batteries used in industrial applications by means of artificial intelligence / 人工知能を用いた産業用二次電池の劣化診断法開発

Minella Bezha 22 March 2020
Rechargeable batteries are used ever more widely, from portable devices and electric vehicles to the effective use of renewable energy, and their importance keeps growing. Since the operating time and behaviour of such equipment depend strongly on the battery, there is a need not only to improve the batteries themselves but also to diagnose their deterioration so that they can be operated more efficiently. This thesis develops a deterioration diagnosis method for secondary batteries based on artificial intelligence, which excels at nonlinear information processing, and thereby establishes techniques that contribute to the effective use of energy. By learning the relationship between the battery voltage and current waveforms recorded during equipment operation and the battery's deterioration characteristics, the method can diagnose deterioration while the equipment is running; this relationship is nonlinear and complex, which makes a neural network, with its black-box mapping from inputs to outputs, well suited to the task. Although training takes time, the diagnosis itself is fast, so the proposed method lends itself to real-time, in-operation use. On this basis, the thesis derives an equivalent circuit model (ECM) of the battery and estimates the state of charge (SOC) and state of health (SOH). The method targets the lithium-ion, nickel-metal-hydride and lead-acid batteries currently used in industrial applications, and is applicable to any battery-powered equipment; its implementation in battery monitoring units (BMU) and in microcontroller-based embedded systems is also demonstrated. The thesis thus establishes a new deterioration diagnosis method for storage batteries and confirms its effectiveness. / Doctor of Philosophy in Engineering, Doshisha University
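A minimal sketch of the black-box idea described above, under assumed waveform features: learn the nonlinear mapping from operating voltage/current characteristics to the state of health offline, so that run-time diagnosis reduces to one cheap forward pass. The features and synthetic SOH labels are illustrative, not the thesis's measured inputs.

```python
# Sketch: train offline on waveform features vs. known SOH, then
# diagnose at run time with a single forward pass. The features
# (mean voltage, voltage sag under load, load current) are assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
v_mean = rng.uniform(3.5, 4.1, 300)    # mean cell voltage (V)
v_sag = rng.uniform(0.05, 0.40, 300)   # voltage drop under a load step (V)
i_load = rng.uniform(0.5, 2.0, 300)    # load current (A)
r_int = v_sag / i_load                 # crude ECM internal-resistance proxy (ohm)
soh = np.clip(110 - 250 * r_int + rng.normal(0, 2, 300), 60, 100)  # synthetic SOH (%)

X = np.column_stack([v_mean, v_sag, i_load])
net = MLPRegressor(hidden_layer_sizes=(12,), max_iter=5000, random_state=4).fit(X, soh)

# Run-time diagnosis: one forward pass on a fresh waveform's features.
print(f"estimated SOH: {net.predict([[3.8, 0.25, 1.2]])[0]:.1f} %")
```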
29

Réalisation d'un réseau de neurones "SOM" sur une architecture matérielle adaptable et extensible à base de réseaux sur puce "NoC" / Neural Network Implementation on an Adaptable and Scalable Hardware Architecture based on Network-on-Chip

Abadi, Mehdi 07 July 2018
Since its introduction in 1982, Kohonen's self-organizing map (SOM) has proved its ability to classify and visualize multidimensional data in various application fields. Hardware implementations of the SOM, which exploit the high degree of parallelism inherent in the Kohonen algorithm, increase the performance of this neural model, often at the expense of flexibility. Software implementations offer that flexibility, but their limited speed makes them unsuitable for real-time applications. In this thesis we propose a distributed, adaptable, flexible and scalable hardware architecture for the SOM, based on a network-on-chip (NoC) and designed for FPGA implementation. Building on this approach, we also propose a novel hardware architecture for a SOM whose structure grows during the learning phase.
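A software sketch of the Kohonen update that the proposed architecture distributes: every node computes its distance to the input in parallel (the step the hardware accelerates), the best matching unit is found, and the neighbourhood is pulled toward the input. This shows only the algorithm; the thesis's contributions, the distributed FPGA/NoC architecture and the growing structure, are not modelled here.

```python
# Sketch of one Kohonen SOM training step on a fixed 8x8 grid.
import numpy as np

rng = np.random.default_rng(5)
grid_h, grid_w, dim = 8, 8, 3
weights = rng.uniform(0, 1, (grid_h, grid_w, dim))
coords = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w), indexing="ij"), -1)

def train_step(x, lr=0.1, sigma=1.5):
    d2 = ((weights - x) ** 2).sum(-1)                  # parallel distance search
    bmu = np.unravel_index(np.argmin(d2), d2.shape)    # best matching unit
    g2 = ((coords - np.array(bmu)) ** 2).sum(-1)       # grid distance to the BMU
    h = np.exp(-g2 / (2 * sigma ** 2))[..., None]      # neighbourhood function
    weights[...] = weights + lr * h * (x - weights)    # cooperative update

for x in rng.uniform(0, 1, (1000, 3)):                 # e.g. RGB samples
    train_step(x)
```

In the NoC implementation each neuron is a node on the on-chip network, so the distance search and neighbourhood update run concurrently rather than in this sequential loop.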
30

Geotechnical Site Characterization And Liquefaction Evaluation Using Intelligent Models

Samui, Pijush 02 1900
Site characterization is an important task in geotechnical engineering. In-situ tests based on the standard penetration test (SPT), the cone penetration test (CPT) and shear wave velocity surveys are popular among geotechnical engineers. Characterizing a site from a finite number of in-situ test measurements of any of these properties is the central task in probabilistic site characterization. These methods have been used to design future soil sampling programs for the site and to specify the soil stratification. It is never possible to know the geotechnical properties at every location beneath an actual site because, in order to do so, one would need to sample and/or test the entire subsurface profile. Therefore, the main objective of site characterization models is to predict the subsurface soil properties with minimum in-situ test data. The prediction of a soil property is a difficult task due to uncertainties: spatial variability, measurement 'noise', measurement and model bias, and statistical error due to limited measurements. Liquefaction of soil is another major problem in geotechnical earthquake engineering. It is defined as the transformation of a granular material from a solid to a liquefied state as a consequence of increased pore-water pressure and reduced effective stress. The generation of excess pore pressure under undrained loading conditions is a hallmark of all liquefaction phenomena, which were brought to the attention of engineers especially after the Niigata (1964) and Alaska (1964) earthquakes. Liquefaction can cause building settlement or tipping, sand boils, ground cracks, landslides, dam instability, highway embankment failures, and other hazards. Such damage is generally of great concern to public safety and is of economic significance. Site-specific evaluation of the liquefaction susceptibility of sandy and silty soils is the first step in liquefaction hazard assessment. Many methods (intelligent models as well as simple methods such as that of Seed and Idriss, 1971) have been suggested to evaluate liquefaction susceptibility based on large data sets from sites where soil has or has not liquefied. The rapid advance in information processing systems in recent decades has directed engineering research towards the development of intelligent models that can model natural phenomena automatically. In an intelligent model, a process of training is used to build up a model of the particular system, from which it is hoped to deduce responses of the system for situations that have yet to be observed. Intelligent models learn the input-output relationship from the data itself; the quantity and quality of the data govern their performance. The objective of this study is to develop intelligent models [geostatistics, artificial neural networks (ANN) and support vector machines (SVM)] to estimate the corrected standard penetration test (SPT) value, Nc, in the three-dimensional (3D) subsurface of Bangalore. The database consists of 766 boreholes spread over a 220 sq km area, with several SPT N values (uncorrected blow counts) in each of them, for a total of 3015 N values in the 3D subsurface of Bangalore. To get the corrected blow counts, Nc, various corrections, such as for overburden stress, size of borehole, type of sampler, hammer energy and length of connecting rod, have been applied to the raw N values.
Using this large database of Nc values in the 3D subsurface of Bangalore, three geostatistical models (simple kriging, ordinary kriging and disjunctive kriging) have been developed. Simple and ordinary kriging produce linear estimators, whereas disjunctive kriging produces a nonlinear estimator. Knowledge of the semivariogram of the Nc data is used in the kriging theory to estimate the values at points in the subsurface of Bangalore where field measurements are not available. The capability of disjunctive kriging as a nonlinear estimator and as an estimator of the conditional probability is explored. A cross-validation (Q1 and Q2) analysis is also done for the developed simple, ordinary and disjunctive kriging models. The results indicate that the performance of the disjunctive kriging model is better than that of the simple as well as the ordinary kriging model. This study also describes two ANN modelling techniques applied to predict Nc at any point in the 3D subsurface of Bangalore. The first technique uses a four-layered feed-forward backpropagation (BP) model to approximate the function Nc = f(x, y, z), where x, y, z are the coordinates in the 3D subsurface of Bangalore. The second technique uses a generalized regression neural network (GRNN) that is trained with suitable spread(s) to approximate the same function. In the BP model, the transfer functions used in the first and second hidden layers are tansig and logsig respectively, with logsig in the output layer; the maximum number of epochs has been set to 30000 and a Levenberg-Marquardt algorithm has been used for training. The performance of the models obtained using both techniques is assessed in terms of prediction accuracy. The BP ANN model outperforms the GRNN model and all the kriging models. An SVM model, which is firmly grounded in statistical learning theory and uses a regression technique based on the ε-insensitive loss function, has also been adopted to predict Nc at any point in the 3D subsurface of Bangalore. The SVM implements the structural risk minimization principle (SRMP), which has been shown to be superior to the more traditional empirical risk minimization principle (ERMP) employed by many of the other modelling techniques. The present study also highlights the capability of SVM over the developed geostatistical models (simple kriging, ordinary kriging and disjunctive kriging) and the ANN models. Further in this thesis, liquefaction susceptibility is evaluated from SPT, CPT and Vs data using BP-ANN and SVM. Intelligent models (based on ANN and SVM) are developed for prediction of liquefaction susceptibility using SPT data from the 1999 Chi-Chi earthquake, Taiwan. Two models (MODEL I and MODEL II) are developed, using the SPT data from the work of Hwang and Yang (2001). In MODEL I, the cyclic stress ratio (CSR) and corrected SPT values (N1)60 have been used for prediction of liquefaction susceptibility; in MODEL II, only the peak ground acceleration (PGA) and (N1)60 have been used. Further, the generalization capability of MODEL II has been examined using different case histories available globally (global SPT data) from the work of Goh (1994). This study also examines the capabilities of ANN and SVM to predict the liquefaction susceptibility of soils from CPT data obtained from the 1999 Chi-Chi earthquake, Taiwan. For determination of liquefaction susceptibility, both ANN and SVM use the classification technique.
The CPT data has been taken from the work of Ku et al. (2004). In MODEL I, cone tip resistance (qc) and CSR values have been used for prediction of liquefaction susceptibility (using both ANN and SVM); in MODEL II, only PGA and qc have been used. The developed MODEL II has also been applied to different case histories available globally (global CPT data) from the work of Goh (1996). Intelligent models (ANN and SVM) have also been adopted for liquefaction susceptibility prediction based on shear wave velocity (Vs), with the Vs data collected from the work of Andrus and Stokoe (1997) and the same procedures as for SPT and CPT applied. SVM outperforms the ANN model for all three models based on SPT, CPT and Vs data. The CPT method gives better results than SPT and Vs for both the ANN and SVM models. For CPT and SPT, two input parameters {PGA and qc or (N1)60} are sufficient to determine the liquefaction susceptibility using the SVM model; a sketch of such a two-input classifier is given below. In this study, an attempt has also been made to evaluate geotechnical site characterization by carrying out in-situ tests using different techniques such as CPT, SPT and multichannel analysis of surface waves (MASW). For this purpose a typical site was selected comprising both a man-made homogeneous embankment and natural ground. For this site, in-situ tests (SPT, CPT and MASW) have been carried out in the different ground conditions and the results compared: three continuous CPT profiles, fifty-four SPT tests and nine MASW profiles with depth, covering both the homogeneous embankment and the natural ground. Relationships have been developed between the Vs, (N1)60 and qc values for this specific site, and from the limited test results a good correlation was found between qc and Vs. Liquefaction susceptibility is evaluated using the in-situ test data from (N1)60, qc and Vs with the ANN and SVM models, and has been shown to compare well with the approach of Idriss and Boulanger (2004) based on SPT test data. An SVM model has also been adopted to determine the over-consolidation ratio (OCR) based on piezocone data, with a sensitivity analysis performed to investigate the relative importance of each of the input parameters. The SVM model outperforms all the available methods for OCR prediction.
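A hedged sketch of the two-input susceptibility classifier referred to above (MODEL II for the CPT case): an SVM classifies liquefaction from peak ground acceleration and cone tip resistance. The training pairs are invented; the study trained on records from the 1999 Chi-Chi earthquake.

```python
# Sketch of MODEL II (CPT case): classify liquefaction from peak ground
# acceleration (PGA) and cone tip resistance (qc) with an SVM. The
# training data below is synthetic, not the Chi-Chi records.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(6)
pga = rng.uniform(0.1, 0.8, 300)      # peak ground acceleration (g)
qc = rng.uniform(1.0, 20.0, 300)      # cone tip resistance (MPa)
# Synthetic labelling rule: strong shaking plus loose soil liquefies.
liquefied = (pga * 10 - 0.6 * qc + rng.normal(0, 0.5, 300) > 0).astype(int)

X = np.column_stack([pga, qc])
clf = SVC(kernel="rbf", C=10.0).fit(X, liquefied)

# Susceptibility screening for two new (PGA, qc) sites.
print(clf.predict([[0.45, 5.0], [0.15, 15.0]]))
```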
