481 |
Performance evaluation of two machine learning algorithms for classification in a production line: Comparing artificial neural network and support vector machine using a quasi-experiment. Jörlid, Olle; Sundbeck, Erik, January 2024 (has links)
This thesis investigated the possibility of using machine learning algorithms to classify items in a queuing system in order to optimize a production line. The evaluated algorithms are Artificial Neural Network (ANN) and Support Vector Machine (SVM), selected based on other research projects. A quasi-experiment evaluates the two machine learning algorithms trained on the same data. The dataset used in the experiment was complex and contained 47,212 rows of samples with features of items from a production setting. Both models performed better than the current system, with ANN reaching 97.5% and SVM 98% on all measurements. The two models differed in training time: ANN took almost 205 seconds while SVM took 1.97 seconds; ANN was, however, 20 times faster at classification. We conclude that ANN and SVM are feasible options for using Artificial Intelligence (AI) to classify items in industrial environments with similar scenarios.
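As an illustration only, the kind of comparison described above can be sketched with scikit-learn, timing training and classification for an MLP and an SVM on the same data. The synthetic features, labels and hyperparameters below are assumptions, not the thesis's actual configuration:

```python
# Hedged sketch: compare an ANN (MLP) and an SVM on the same tabular data,
# timing training and classification. Synthetic data stands in for the
# 47,212-row production dataset described in the abstract.
import time
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(47212, 10))          # placeholder item features
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # placeholder class labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
scaler = StandardScaler().fit(X_tr)
X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)

for name, model in [("ANN", MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300)),
                    ("SVM", SVC(kernel="rbf"))]:
    t0 = time.perf_counter()
    model.fit(X_tr, y_tr)
    train_s = time.perf_counter() - t0
    t0 = time.perf_counter()
    pred = model.predict(X_te)
    infer_s = time.perf_counter() - t0
    print(f"{name}: accuracy={accuracy_score(y_te, pred):.3f}, "
          f"train={train_s:.2f}s, classify={infer_s:.3f}s")
```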
|
482 |
Fracture Characteristics Of Self Consolidating Concrete. Naddaf, Hamid Eskandari, 07 1900 (has links)
Self-consolidating concrete (SCC) has come into wide use in recent years for placement in congested reinforced concrete structures, and it represents one of the most outstanding advances in concrete technology during the last two decades. The current work examines the mechanical properties of SCC, compares the fracture characteristics of notched and unnotched beams of plain concrete, and uses acoustic emission (AE) to understand the localization of crack patterns at different stages of loading.
An artificial neural network (ANN) is proposed to predict the 28-day compressive strength of normal- and high-strength SCC and HPC with high-volume fly ash. The ANN is trained on data available in the literature for normal-volume fly ash, because data on SCC with high-volume fly ash are not available in sufficient quantity.
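A small regression network of the kind described could be set up roughly as below; this is a sketch only, and the mix-proportion features, the tiny illustrative dataset and the network size are assumptions rather than those used in the thesis:

```python
# Hedged sketch: an MLP regressor predicting 28-day compressive strength from
# concrete mix proportions. Features, values and network size are placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# columns: cement, fly ash, water, superplasticizer, fine agg., coarse agg. (kg/m^3)
X_train = np.array([[350, 100, 180, 4.0, 800, 950],
                    [300, 200, 170, 6.0, 820, 930],
                    [400,  50, 160, 5.0, 780, 970]])
y_train = np.array([45.0, 38.5, 55.2])   # 28-day strength in MPa (illustrative)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                                   random_state=1))
model.fit(X_train, y_train)
print(model.predict([[320, 150, 175, 5.0, 810, 940]]))  # predicted strength, MPa
```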
Fracture characteristics of notched and unnotched beams of plain self-consolidating concrete were studied using acoustic emission to understand the localization of crack patterns at different stages. Using this as a platform, further analysis was carried out with moment tensor analysis as a new way to evaluate fracture characteristics in terms of crack orientation and direction of crack propagation at the nano and micro levels. An analysis of the B-value (a b-value based on energy) is also carried out; this introduces the idea of performing the analysis on the basis of energy, which gives a clearer picture of the results than the analysis carried out using amplitudes.
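For reference, b-value analysis of AE data is conventionally based on a Gutenberg–Richter type frequency–magnitude relation; a standard amplitude-based form and its maximum-likelihood estimate are sketched below. The thesis's energy-based B-value would substitute an energy-derived magnitude, so the exact formulation here is an assumption, not the author's:

```latex
% Standard Gutenberg-Richter type relation for AE events (amplitude A in dB);
% the energy-based B-value discussed above would use an energy-derived magnitude
% in place of M -- this is a conventional sketch, not the thesis's exact form.
\log_{10} N(\ge M) = a - b\,M, \qquad M = \frac{A_{\mathrm{dB}}}{20}
% Aki's maximum-likelihood estimate of b from recorded magnitudes M_i with cutoff M_min:
\hat{b} = \frac{\log_{10} e}{\overline{M} - M_{\min}}
```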
Further, a new concept is introduced to analyze cracks smaller than microcracks (possibly hepto cracks) in solid materials. Each crack formation corresponds to an AE event and is processed and analyzed for crack orientation and crack volume at the hepto and micro levels using energy-based moment tensor analysis. Cracks tinier than microcracks (possibly hepto) form in large numbers at very early stages of loading, prior to peak load. The volume of hepto and micro cracks is difficult to measure physically, but it can be characterized using AE data in energy-based moment tensor analysis. It is conjectured that the ratio of hepto-crack volume to micro-crack volume could reach a critical value, which could be an indicator of the onset of microcracks after the formation of hepto cracks.
|
483 |
Accuracy Improvement of Predictive Neural Networks for Managing Energy in Solar Powered Wireless Sensor Nodes. Al_Omary, Murad, 20 December 2019 (has links)
Das drahtlose Sensornetzwerk (WSN) ist eine Technologie, die Umgebungsbedingungen oder physikalische Parameter misst, weiterleitet und per Fernüberwachung zur Verfügung stellt. Normalerweise werden die Sensorknoten, die diese Netzwerke bilden, von Batterien gespeist. Diese sollen aus verschiedenen Gründen nicht mehr verwendet werden, sondern es wird auf eine eigenständige Stromversorgung gesetzt. Dies soll den aufwendigen Austausch und die Wartung minimieren. Energy Harvesting kann mit den Knoten verwendet werden, um die Batterien zu unterstützen und die Lebensdauer der Netzwerke zu verlängern.
Aufgrund der hohen Leistungsdichte der Solarenergie im Vergleich zu verschiedenen anderen Umweltenergien sind Solarzellen die am häufigsten eingesetzten Wandler, allerdings stellt die schwankende und intermittierende Natur der Solarenergie eine Herausforderung dar, einen funktionalen und zuverlässigen Sensorknoten zu versorgen.
Um den Sensorknoten effektiv zu betreiben, sollte sein Energieverbrauch sinnvoll gesteuert werden. Ein interessanter Ansatz zu diesem Zweck ist die Steuerung der Aktivitäten des Knotens in Abhängigkeit von der zukünftig verfügbaren Energie. Dies erfordert eine Vorhersage der wandelbaren Sonnenenergie für die kommenden Betriebszeiten einschließlich der freien Zeiten der Sonne. Einige Vorhersagealgorithmen wurden mit stochastischen und statistischen Prinzipien sowie mit Methoden der künstlichen Intelligenz (KI) erstellt. Durch diese Algorithmen bleibt ein erheblicher Vorhersagefehler von 5-70%, der den zuverlässigen Betrieb der Knoten beeinträchtigt. Beispielsweise verwenden die stochastischen Methoden einen diskreten Energiezustand, der meist nicht zu den tatsächlichen Messwerten passt. Die statistischen Methoden verwenden einen Gewichtungsfaktor für die zuvor registrierten Messwerte. Daher sind sie nur geeignet, um Energieprofile bei konstanten Wetterbedingungen vorherzusagen. KI-Methoden erfordern große Beobachtungen im Trainingsprozess, die den benötigten Speicherplatz erhöhen. Dementsprechend ist die Leistung hinsichtlich der Vorhersagegenauigkeit dieser Algorithmen nicht ausreichend.
In dieser Arbeit wird ein Vorhersagealgorithmus mit einem neuronalen Netzwerk entwickelt und eingebunden in einen Mikrocontroller, um die Verwaltung des Energieverbrauchs von solarzellengesteuerten Sensorknoten zu optimieren. Das verwendete neuronale Netzwerk wurde mit einer Kombination aus meteorologischen und statistischen Eingangsparametern realisiert. Dies hat zum Ziel, die erforderlichen Designkriterien für Sensorknoten zu erfüllen und eine Leistung zu erreichen, die in ihrer Genauigkeit die Leistung der oben genannten traditionellen Algorithmen übersteigt. Die Vorhersagegenauigkeit die durch den Korrelationskoeffizienten repräsentiert wird, wurde für das entwickelte neuronale Netzwerk auf 0,992 bestimmt. Das genaueste traditionelle Netzwerk erreicht nur einen Wert von 0,963.
Das entwickelte neuronale Netzwerk wurde in einen Prototyp eines Sensorknotens integriert, um die Betriebszustände oder -modi über einen Simulationszeitraum von einer Woche anzupassen. Während dieser Zeit hat der Sensorknoten 6 Stunden zusätzlich im Normalbetrieb gearbeitet. Dies trug dazu bei, eine effektive Nutzung der verfügbaren Energie um ca. 3,6% besser zu erfüllen als das genaueste traditionelle Netz. Dadurch wird eine längere Lebensdauer und Zuverlässigkeit des Sensorknotens erreicht. / Wireless Sensor Network (WSN) is a technology that measures an environmental or physical parameters in order to use them by decision makers with a possibility of remote monitoring. Normally, sensor nodes that compose these networks are powered by batteries which are no longer feasible, especially when they used as fixed and standalone power source. This is due to the costly replacement and maintenance. Ambient energy harvesting systems can be used with these nodes to support the batteries and to prolong the lifetime of these networks.
Due to the high power density of solar energy in comparison with other environmental energy sources, solar cells are the most widely utilized harvesting systems. Even so, the fluctuating and intermittent nature of solar energy poses a real challenge to achieving a functional and reliable sensor node.
In order to operate the sensor node effectively, its energy consumption should be well managed. One interesting approach for this purpose is to control the node’s future activities according to the energy expected to be available. This requires a prior prediction of the harvestable solar energy for the upcoming operation periods, including the periods without sun. A few prediction algorithms have been created using stochastic and statistical principles as well as artificial intelligence (AI) methods. These algorithms leave a considerable prediction error of 5-70%, which affects the reliable operation of the nodes. For example, the stochastic methods use discrete energy states that mostly do not fit the actual readings. The statistical methods use weighting factors for the previously registered readings; thus, they are suitable only for predicting energy profiles under consistent weather conditions. AI methods require large numbers of observations in the training process, which increases the memory space needed. Accordingly, the prediction accuracy of these algorithms is not sufficient.
In this thesis, a prediction algorithm using a neural network has been proposed and implemented in a microcontroller for managing the energy consumption of solar-cell-driven sensor nodes. The neural network has been developed using a combination of meteorological and statistical input parameters, in order to meet the design criteria required for sensor nodes and to achieve an accuracy that exceeds that of the aforementioned traditional algorithms. The prediction accuracy, represented by the correlation coefficient, was 0.992 for the developed neural network, compared with 0.963 for the most accurate traditional network.
The developed neural network has been embedded into a sensor node prototype to adjust the operating states or modes over a simulation period of one week. During this period, the sensor node worked 6 additional hours in normal operation mode. This helped to use the available energy approximately 3.6% more effectively than the most accurate traditional network, resulting in a longer lifetime and a more reliable sensor node.
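A rough sketch of the prediction-and-scoring step described above is given below: a small network is trained on a mix of meteorological and statistical inputs and its accuracy is reported as the Pearson correlation coefficient. The feature choice, synthetic data and network size are assumptions, not the thesis's actual setup:

```python
# Hedged sketch: predict harvestable solar energy for the next period from
# meteorological and statistical inputs and report the correlation coefficient
# used above as the accuracy measure. All features and data are placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(42)
n = 500
hour     = rng.uniform(0, 24, n)           # temporal/meteorological input
cloud    = rng.uniform(0, 1, n)            # forecast cloud cover
prev_day = rng.uniform(0, 50, n)           # statistical input: energy one day earlier (J)
y = np.clip(np.sin(np.pi * hour / 24), 0, None) * (1 - 0.8 * cloud) * 50 \
    + 0.1 * prev_day + rng.normal(0, 1, n)  # synthetic harvested energy

X = np.column_stack([hour, cloud, prev_day])
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=3000,
                                   random_state=0))
model.fit(X[:400], y[:400])
pred = model.predict(X[400:])
r = np.corrcoef(pred, y[400:])[0, 1]        # Pearson correlation coefficient
print(f"correlation coefficient: {r:.3f}")
```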
|
484 |
High-Performance Network-on-Chip Design for Many-Core Processors. Wang, Boqian, January 2020 (has links)
With the development of on-chip manufacturing technologies and the requirements of high-performance computing, the core count is growing quickly in Chip Multi/Many-core Processors (CMPs) and Multiprocessor Systems-on-Chip (MPSoCs) to support larger-scale parallel execution. Network-on-Chip (NoC) has become the de facto solution for CMPs and MPSoCs in addressing the communication challenge. In this thesis, we tackle a few key problems facing high-performance NoC designs. For general-purpose CMPs, we take a full system perspective to design a high-performance NoC for multi-threaded programs. By exploring cache coherence under the whole-system scenario, we present a smart communication service called Advance Virtual Channel Reservation (AVCR) that provides a highway to target packets, which can greatly reduce their contention delay in the NoC. AVCR takes advantage of the fact that we can know or predict the destination of some packets ahead of their arrival at the Network Interface (NI). Exploiting the time interval before a packet is ready, AVCR establishes an end-to-end highway from the source NI to the destination NI. This highway is built up by reserving the Virtual Channel (VC) resources ahead of the target packet transmission and offering priority service to flits in the reserved VC in the wormhole router, which avoids the target packets’ VC allocation and switch arbitration delay. In addition, we propose an admission control method for the NoC with a centralized Artificial Neural Network (ANN) admission controller, which can improve system performance by predicting the most appropriate injection rate of each node from network performance information. In the online control process, a data preprocessing unit is applied to simplify the ANN architecture and make the prediction results more accurate. Based on the preprocessed information, the ANN predictor determines the control strategy and broadcasts it to each node, where the admission control is applied. For application-specific MPSoCs, we focus on developing a high-performance NoC and NI compatible with the common AMBA AXI4 interconnect protocol. To offer the possibility of utilizing AXI4-based processors and peripherals in the on-chip-network-based system, we propose a whole-system architecture solution that makes the AXI4 protocol compatible with the NoC-based communication interconnect in the many-core system. Because possible out-of-order transmission in the NoC interconnect conflicts with the ordering requirements specified by the AXI4 protocol, we first focus on the design of the transaction ordering units, realizing a high-performance and low-cost solution to the ordering requirements. The microarchitectures and the functionalities of the transaction ordering units are also described and explained in detail for ease of implementation. Then, we focus on the NI and the Quality of Service (QoS) support in the NoC. In our design, the NI makes the NoC architecture independent of the AXI4 protocol via message format conversion between the AXI4 signal format and the packet format, offering high flexibility to the NoC design. The NoC-based communication architecture is designed to support multiple high-performance QoS schemes. The NoC system contains Time Division Multiplexing (TDM) and VC subnetworks to apply multiple QoS schemes to AXI4 signals with different QoS tags, and the NI is responsible for traffic distribution between the two subnetworks.
Besides, a QoS inheritance mechanism is applied in the slave-side NI to support QoS during packets’ round-trip transfer in NoC. / Med utvecklingen av tillverkningsteknologi av on-chip och kraven på högpresterande datoranläggning växer kärnantalet snabbt i Chip Multi/Many-core Processors (CMPs) och Multiprocessor Systems-on-Chip (MPSoCs) för att stödja större parallellkörning. Network-on-Chip (NoC) har blivit den de facto lösningen för CMP:er och MPSoC:er för att möta kommunikationsutmaningen. I uppsatsen tar vi upp några viktiga problem med högpresterande NoC-konstruktioner. Allmänna CMP:er omfattas ett fullständigt systemperspektiv för att design högpresterande NoC för flertrådad program. Genom att utforska cachekoherensen under hela systemscenariot presenterar vi en smart kommunikationstjänst, AVCR (Advance Virtual Channel Reservation) för att tillhandahålla en motorväg till målpaket, vilket i hög grad kan minska deras förseningar i NoC. AVCR utnyttjar det faktum att vi kan veta eller förutsäga destinationen för vissa paket före deras ankomst till nätverksgränssnittet (Network interface, NI). Genom att utnyttja tidsintervallet innan ett paket är klart, etablerar AVCR en ände till ände motorväg från källan NI till destinationen NI. Denna motorväg byggs upp genom att reservera virtuell kanal (Virtual Channel, VC) resurser före målpaketsöverföringen och erbjuda prioriterade tjänster till flisar i den reserverade VC i wormhole router. Dessutom föreslår vi också en tillträdeskontrollmetod i NoC med en centraliserad artificiellt neuronät (Artificial Neural Network, ANN) tillträdeskontroll, som kan förbättra systemets prestanda genom att förutsäga den mest lämpliga injektionshastigheten för varje nod via nätverksprestationsinformationen. I onlinekontrollprocessen används en förbehandlingsenhet på data för att förenkla ANN-arkitekturen och göra förutsägningsresultaten mer korrekta. Baserat på den förbehandlade informationen bestämmer ANN-prediktorn kontrollstrategin och sänder den till varje nod där tillträdeskontrollen kommer att tillämpas. För applikationsspecifika MPSoC:er fokuserar vi på att utveckla högpresterande NoC och NI kompatibla med det gemensamma AMBA AXI4 protokoll. För att erbjuda möjligheten att använda AXI4-baserade processorer och kringutrustning i det on-chip baserade nätverkssystemet föreslår vi en hel systemarkitekturlösning för att göra AXI4 protokollet kompatibelt med den NoC-baserade kommunikation i det multikärnsystemet. På grund av den out-of-order överföring i NoC, som strider mot ordningskraven som anges i AXI4-protokollet, fokuserar vi i första hand på utformningen av transaktionsordningsenheterna, för att förverkliga en hög prestanda och låg kostnad-lösning på ordningskraven. Sedan fokuserar vi på NI och Quality of Service (QoS)-stödet i NoC. I vår design föreslås NI att göra NoC-arkitekturen oberoende av AXI4-protokollet via meddelandeformatkonvertering mellan AXI4 signalformatet och paketformatet, vilket erbjuder NoC-designen hög flexibilitet. Den NoC-baserade kommunikationsarkitekturen är utformad för att stödja flera QoS-schema med hög prestanda. NoC-systemet innehåller Time-Division Multiplexing (TDM) och VC-subnät för att tillämpa flera QoS-scheman på AXI4-signaler med olika QoS-taggar och NI ansvarar för trafikdistribution mellan två subnät. Dessutom tillämpas en QoS-arvsmekanism i slav-sidan NI för att stödja QoS under paketets tur-returöverföring i NoC.
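The centralized ANN admission-control idea described above can be sketched roughly as follows: a small network maps preprocessed NoC performance metrics to a recommended injection rate that would be broadcast to the nodes. The metrics, target function and sizes are illustrative assumptions only, not the thesis's actual controller:

```python
# Hedged sketch of an ANN admission controller for a NoC: predict a per-node
# injection rate from preprocessed network metrics. Features and targets are
# invented for illustration.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(7)
n = 1000
buffer_occupancy = rng.uniform(0, 1, n)    # preprocessed metric per sampling window
avg_latency      = rng.uniform(5, 80, n)   # average packet latency (cycles)
accepted_traffic = rng.uniform(0, 0.4, n)  # flits/node/cycle
X = np.column_stack([buffer_occupancy, avg_latency, accepted_traffic])

# illustrative target: back off injection as latency and occupancy grow
y = np.clip(0.4 - 0.003 * avg_latency - 0.2 * buffer_occupancy, 0.02, 0.4)

scaler = MinMaxScaler().fit(X)             # plays the "data preprocessing unit" role
ctrl = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
ctrl.fit(scaler.transform(X), y)

current = np.array([[0.7, 60.0, 0.25]])    # current network state
rate = ctrl.predict(scaler.transform(current))[0]
print(f"broadcast injection rate: {rate:.3f} flits/node/cycle")
```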
|
485 |
AI inom radiologi, nuläge och framtid / AI in radiology, now and the future. Täreby, Linus; Bertilsson, William, January 2023 (has links)
Denna uppsats presenterar resultaten av en kvalitativ undersökning som syftar till att ge en djupare förståelse för användningen av AI inom radiologi, dess framtida påverkan på yrket och hur det används idag. Genom att genomföra tre intervjuer med personer som arbetar inom radiologi, har datainsamlingen fokuserat på att identifiera de positiva och negativa aspekterna av AI i radiologi, samt dess potentiella konsekvenser på yrket. Resultaten visar på en allmän acceptans för AI inom radiologi och dess förmåga att förbättra diagnostiska processer och effektivisera arbetet. Samtidigt finns det en viss oro för att AI kan ersätta människor och minska behovet av mänskliga bedömningar. Denna uppsats ger en grundläggande förståelse för hur AI används inom radiologi och dess möjliga framtida konsekvenser. / This essay presents the results of a qualitative study aimed at gaining a deeper understanding of the use of artificial intelligence (AI) in radiology, its potential impact on the profession and how it’s used today. By conducting three interviews with individuals working in radiology, data collection focused on identifying the positive and negative aspects of AI in radiology, as well as its potential consequences on the profession. The results show a general acceptance of AI in radiology and its ability to improve diagnostic processes and streamline work. At the same time, there is a certain concern that AI may replace humans and reduce the need for human judgments. This report provides a basic understanding of how AI is used in radiology and its possible future consequences.
|
486 |
類神經網路在汽車保險費率擬訂的應用 / Artificial Neural Network Applied to Automobile Insurance Ratemaking. 陳志昌, Chen, Chi-Chang Season, Unknown Date (has links)
自1999年以來,台灣汽車車體損失險的投保率下降且損失率逐年上升,與強制第三責任險損失率逐年下降形成強烈對比,理論上若按個人風險程度計收保費,吸引價格認同的被保險人加入並對高風險者加費,則可提高投保率並且確保損失維持在合理範圍內。基於上述背景,本文採用國內某產險公司1999至2002年汽車車體損失保險資料為依據,探討過去保費收入與未來賠款支出的關係,在滿足不偏性的要求下,尋求降低預測誤差變異數的方法。
研究結果顯示：車體損失險存在保險補貼。以最小誤差估計法計算的新費率，可以改善收支不平衡的現象，但對於應該減費的低風險保戶，以及應該加費的高風險保戶，以類神經網路推計的加減費系統具有較大加減幅度，因此更能有效的區分高低風險群組，降低不同危險群組間的補貼現象，並在跨年度的資料中具有較小的誤差變異。 / In the past five years, the insured rate of Automobile Material Damage Insurance (AMDI) has declined while the loss ratio has climbed, in contrast to the decreasing trend in the loss ratio of the compulsory automobile liability insurance. By charging premiums that correspond to individual risk, we could attract low-risk entrants and reflect the cost of high risks, so the loss ratio can be kept at a reasonable level. To illustrate the concept further, we take AMDI data to study the most efficient estimator of future claims. Because the relationship between loss experience (input) and future claim estimation (output) resembles the mapping the human brain performs, we analyze the relation with the minimum bias procedure and an artificial neural network, so that reducing the error of the overall rate level can be carried down to a minimum error for classes or individuals, demonstrated using policy year 1999 to 2002 data.
According to the thesis, cross-subsidization exists in Automobile Material Damage Insurance. The new rates produced by the minimum bias estimate can alleviate the imbalance between premium and loss. However, the neural network classification rating can allocate those premiums more fairly, where ‘fairly’ means that higher premiums are paid by those insured with a greater risk of loss and vice versa. It is also more efficient than the minimum bias estimator on the panel data.
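For illustration of the minimum bias procedure mentioned above, a Bailey-style multiplicative iteration for a two-factor classification plan is sketched below; the rating factors, exposures and losses are invented, not taken from the insurer's 1999-2002 data:

```python
# Hedged sketch of a two-factor multiplicative minimum bias iteration
# (Bailey-style balance equations) on invented exposure/loss figures.
import numpy as np

# rows: driver-age classes, columns: vehicle classes
exposure = np.array([[120.0,  80.0, 40.0],
                     [200.0, 150.0, 60.0]])
losses   = np.array([[ 90.0,  70.0, 45.0],
                     [110.0, 100.0, 50.0]])
observed = losses / exposure            # observed pure premium per cell

x = np.ones(observed.shape[0])          # age-class relativities
y = np.ones(observed.shape[1])          # vehicle-class relativities
for _ in range(100):                    # iterate the balance equations
    x = (exposure * observed) @ np.ones_like(y) / (exposure @ y)
    y = (exposure * observed).T @ np.ones_like(x) / (exposure.T @ x)

fitted = np.outer(x, y)                 # fitted pure premium per cell
print(np.round(fitted, 3))
```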
|
487 |
基於EEMD與類神經網路預測方法進行台股投資組合交易策略 / Portfolio of stocks trading by using EEMD-based neural network learning paradigms. 賴昱君, Lai, Yu Chun, Unknown Date (has links)
對投資者而言，投資股市的目的就是賺錢，但影響股價因素眾多，我們要如何判斷明天是漲是跌？因此如何建立一個準確的預測模型，一直是財務市場研究的課題之一，然而財務市場一直被認為是一個複雜、充滿不確定性及非線性的動態系統，這也是在建構模型上一個很大的阻礙，本篇研究中使用的EEMD方法則適合解決如金融市場或氣候等此類的非線性問題及有趨勢性的資料上。
在本研究中,我們將EEMD結合ANN建構出兩種不同形式的模型去進行台股個股的預測,也試圖改善ARMA模型使其預測效果較好;此外為了能夠達到分散風險的效果,採用了投資組合的方式,在權重的決定上,我們結合動態與靜態的方式來計算權重;至於在交易策略上,本研究也加入了移動平均線,希望能找到最適合的預測模型,本研究所使用的標的物為曾在該期間被列為注意股票的10檔股票。
另外,我們也分析了影響台股個股價格波動的因素,透過EEMD拆解,我們能夠從中得到具有不同意義的本徵模態函數(IMF),藉由統計值分析重要的IMF其所代表的意義。例如:影響高頻波動的重要因素為新聞媒體或突發事件,影響中頻的重要因素為法人買賣及季報,而影響低頻的重要因素則為季節循環。
結果顯示，EEMD-ANN Model 1是一個穩健的模型，能夠創造出將近20%的年報酬率，其次為EEMD-ANN Model 2，在搭配移動平均線的策略後，表現與Model 1差不多，但在沒有配合移動平均線策略時，雖報酬率仍為正，但較不穩定，因此從研究結果也可以看到，EEMD-ANN的模型皆表現比ARMA的預測模型好。 / The main purpose of investing is to earn profits, but many factors can influence stock prices, and investors want to know whether the price will rise or fall tomorrow. Therefore, how to establish an accurate forecasting model is one of the important issues studied in financial market research. However, the financial market is considered a complex, uncertain, and non-linear dynamic system, and these characteristics are obstacles to constructing models. The EEMD method used in this study is suitable for non-linear data with trends, such as financial markets and climate.
In this thesis, we used three models, including an ARMA model and two types of EEMD-ANN composite models, to forecast stock prices. In addition, we tried to improve the ARMA model, so a new model was proposed. Through EEMD, the fluctuation of a stock price can be decomposed into several IMFs with different economic meanings. Moreover, we adopted a portfolio approach to spread risk, integrating static and dynamic weights to decide the optimal weights. We also added a moving average indicator to our trading strategy. The subjects of this study are 10 stocks that were listed as attention stocks during the period.
Our results showed that EEMD-ANN Model 1 is a robust model: it is not only the best model but can also produce a yearly return of nearly 20%. We also find that the EEMD-ANN models have better outcomes than the traditional ARMA model. Owing to that, an increase in trading performance can be expected from the selected EEMD-ANN model.
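A rough sketch of the EEMD-ANN pipeline described above is given below; it assumes the PyEMD package (installed as EMD-signal) for the decomposition, and the lag structure, network size and synthetic price series are illustrative assumptions rather than the thesis's models:

```python
# Hedged sketch of an EEMD-ANN forecaster: decompose the price series into
# components, fit one small MLP per component on lagged values, and sum the
# component forecasts. Assumes PyEMD (pip install EMD-signal); data is synthetic.
import numpy as np
from PyEMD import EEMD
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
price = 100 + np.cumsum(rng.normal(0, 1, 300))   # placeholder daily closes

eemd = EEMD(trials=50)
imfs = eemd.eemd(price)                          # oscillatory modes, high to low frequency

lag = 5
def lagged(series, lag):
    # build a matrix of lagged inputs and the corresponding next-step targets
    X = np.column_stack([series[i:len(series) - lag + i] for i in range(lag)])
    return X, series[lag:]

forecast = 0.0
for comp in imfs:                                # one predictor per component
    X, y = lagged(comp, lag)
    net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=3000, random_state=0)
    net.fit(X[:-1], y[:-1])
    forecast += net.predict(comp[-lag:].reshape(1, -1))[0]

print(f"next-day price forecast: {forecast:.2f}")
```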
|
488 |
Hidrodinamika i prenos mase u airlift reaktoru sa membranom / Hydrodynamics and mass transfer of an airlift reactor with inserted membrane. Kojić, Predrag, 20 May 2016 (links)
U okviru doktorske disertacije izvedena su eksperimentalna istraživanja osnovnih hidrodinamičkih i maseno-prenosnih karakteristika airlift reaktora sa spoljnom recirkulacijom sa ugrađenom višekanalnom cevnom membranom u silaznu cev (ALSRM). ALSRM je radio na dva načina rada: bez mehurova u silaznoj cevi (način rada A) i sa mehurovima u silaznoj cevi (način rada B) u zavisnosti od nivoa tečnosti u gasnom separatoru. Ispitivani su uticaji prividne brzine gasa, površinskih osobina tečne faze, tipa distributora gasa i prisustva mehurova gasa u silaznoj cevi na sadržaj gasa, brzinu tečnosti u silaznoj cevi i zapreminski koeficijent prenosa mase u tečnoj fazi u ALSRM. Rezultati su poređeni sa vrednostima dobijenim u istom reaktoru ali bez membrane (ALSR). Sadržaj gasa u uzlaznoj i silaznoj cevi određivan je pomoću piezometarskih cevi merenjem hidrostatičkog pritiska na dnu i vrhu uzlazne i silazne cevi. Brzina tečnosti merena je pomoću konduktometrijskih elektroda dok je zapreminski koeficijent prenosa mase dobijen primenom dinamičke metode merenjem promene koncentracije kiseonika u vremenu optičkom elektrodom. Eksperimentalni rezultati pokazuju da sadržaj gasa, brzina tečnosti i zapreminski koeficijent prenosa mase zavise od prividne brzine gasa, vrste alkohola i tipa distributora gasa kod oba reaktora. Višekanalna cevna membrana u silaznoj cevi uzrokovala je povećanje ukupnog koeficijenta trenja za 90% i time dovela do smanjenja brzine tečnosti u silaznoj cevi do 50%. Smanjena brzina tečnosti u silaznoj cevi povećala je sadržaj gasa do 16%. Predložene neuronske mreže i empirijske korelacije odlično predviđaju vrednosti za sadržaj gasa, brzinu tečnosti i zapreminski koeficijent prenosa mase. / An objective of this study was to investigate the hydrodynamics and the gas-liquid mass transfer coefficient of an external-loop airlift membrane reactor (ELAMR). The ELAMR was operated in two modes: without (mode A) and with bubbles in the downcomer (mode B), depending on the liquid level in the gas separator. The influence of superficial gas velocity, gas distributor geometry and various diluted alcohol solutions on the hydrodynamics and gas-liquid mass transfer coefficient of the ELAMR was studied. Results are discussed with respect to the external-loop airlift reactor of the same geometry but without a membrane in the downcomer (ELAR). The gas holdup values in the riser and the downcomer were obtained by measuring the pressures at the bottom and the top of the riser and downcomer using piezometric tubes. The liquid velocity in the downcomer was determined by the tracer response method using two conductivity probes in the downcomer. The volumetric mass transfer coefficient was obtained using the dynamic oxygenation method with a dissolved oxygen probe. According to the experimental results, the gas holdup, liquid velocity and gas-liquid mass transfer coefficient depend on the superficial gas velocity, type of alcohol solution and gas distributor for both reactors. Due to the presence of the multichannel membrane in the downcomer, the overall hydrodynamic resistance increased by up to 90%, the liquid velocity in the downcomer decreased by up to 50%, while the gas holdup in the riser of the ELAMR increased by at most 16%. The values of the gas holdup, the liquid velocity and the gas-liquid mass transfer coefficient predicted by empirical power-law correlations and a feed-forward back-propagation neural network (ANN) are in very good agreement with the experimental values.
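For reference, the dynamic oxygenation method mentioned above is usually evaluated from the unsteady oxygen balance in the liquid phase; a standard form, assuming a well-mixed liquid and neglecting probe dynamics, is:

```latex
% Oxygen balance in the liquid phase for the dynamic method (well-mixed liquid,
% probe response neglected); k_L a follows as the slope of the log term vs. time.
\frac{dC_L}{dt} = k_L a\,\bigl(C^{*} - C_L\bigr)
\qquad\Longrightarrow\qquad
\ln\frac{C^{*} - C_{L,0}}{C^{*} - C_L(t)} = k_L a\,(t - t_0)
```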
|
489 |
Étude et conception d'un système automatisé de contrôle d'aspect des pièces optiques basé sur des techniques connexionnistes / Investigation and design of an automatic system for optical devices' defects detection and diagnosis based on connexionist approach. Voiry, Matthieu, 15 July 2008 (links)
Dans différents domaines industriels, la problématique du diagnostic prend une place importante. Ainsi, le contrôle d’aspect des composants optiques est une étape incontournable pour garantir leurs performances opérationnelles. La méthode conventionnelle de contrôle par un opérateur humain souffre de limitations importantes qui deviennent insurmontables pour certaines optiques hautes performances. Dans ce contexte, cette thèse traite de la conception d’un système automatique capable d’assurer le contrôle d’aspect. Premièrement, une étude des capteurs pouvant être mis en oeuvre par ce système est menée. Afin de satisfaire à des contraintes de temps de contrôle, la solution proposée utilise deux capteurs travaillant à des échelles différentes. Un de ces capteurs est basé sur la microscopie Nomarski ; nous présentons ce capteur ainsi qu’un ensemble de méthodes de traitement de l’image qui permettent, à partir des données fournies par celui-ci, de détecter les défauts et de déterminer la rugosité, de manière robuste et répétable. L’élaboration d’un prototype opérationnel, capable de contrôler des pièces optiques de taille limitée valide ces différentes techniques. Par ailleurs, le diagnostic des composants optiques nécessite une phase de classification. En effet, si les défauts permanents sont détectés, il en est de même pour de nombreux « faux » défauts (poussières, traces de nettoyage...). Ce problème complexe est traité par un réseau de neurones artificiels de type MLP tirant parti d’une description invariante des défauts. Cette description, issue de la transformée de Fourier-Mellin est d’une dimension élevée qui peut poser des problèmes liés au « fléau de la dimension ». Afin de limiter ces effets néfastes, différentes techniques de réduction de dimension (Self Organizing Map, Curvilinear Component Analysis et Curvilinear Distance Analysis) sont étudiées. On montre d’une part que les techniques CCA et CDA sont plus performantes que SOM en termes de qualité de projection, et d’autre part qu’elles permettent d’utiliser des classifieurs de taille plus modeste, à performances égales. Enfin, un réseau de neurones modulaire utilisant des modèles locaux est proposé. Nous développons une nouvelle approche de décomposition des problèmes de classification, fondée sur le concept de dimension intrinsèque. Les groupes de données de dimensionnalité homogène obtenus ont un sens physique et permettent de réduire considérablement la phase d’apprentissage du classifieur tout en améliorant ses performances en généralisation / In various industrial fields, the problem of diagnosis is of great interest. For example, checking surface imperfections on an optical device is necessary to guarantee its operational performance. The conventional control method, based on human expert visual inspection, suffers from limitations that become critical for some high-performance components. In this context, this thesis deals with the design of an automatic system able to carry out the diagnosis of appearance flaws. To fulfil the time constraints, the suggested solution uses two sensors working at different scales. We present one of them, based on Nomarski microscopy, together with image processing methods that allow, from the data it provides, defects to be detected and roughness to be determined in a reliable and repeatable way. The development of an operational prototype, able to check optical components of limited size, validates the proposed techniques. The final diagnosis also requires a classification phase.
Indeed, while the permanent defects are detected, many “false” defects (dust, cleaning marks, etc.) are flagged as well. This complex problem is solved by an MLP artificial neural network using an invariant description of the defects. This representation, resulting from the Fourier-Mellin transform, is a high-dimensional vector, which raises problems linked to the “curse of dimensionality”. In order to limit these harmful effects, various dimensionality reduction techniques (Self Organizing Map, Curvilinear Component Analysis and Curvilinear Distance Analysis) are investigated. On one hand, we show that CCA and CDA are more powerful than SOM in terms of projection quality; on the other hand, these methods allow simpler classifiers to be used with equal performance. Finally, a modular neural network that exploits local models is developed. We propose a new scheme for decomposing classification problems, based on the concept of intrinsic dimension. The resulting data clusters of homogeneous dimensionality have a physical meaning and permit the training phase of the classifier to be reduced considerably, while improving its generalization performance.
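The classification stage described above (reduce the high-dimensional invariant descriptors, then classify defect vs. false defect with an MLP) can be sketched as follows. CCA and CDA are not available in scikit-learn, so PCA stands in here purely as a placeholder for the dimensionality-reduction step, and the descriptor size and data are assumptions:

```python
# Hedged sketch: dimensionality reduction followed by an MLP classifier for
# defect vs. false-defect discrimination. PCA is only a stand-in for CCA/CDA;
# the 128-dim descriptors and labels are synthetic.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n, d = 600, 128                               # assumed descriptor dimension
X = rng.normal(size=(n, d))
y = (X[:, :3].sum(axis=1) > 0).astype(int)    # 1 = permanent defect, 0 = false defect

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = make_pipeline(StandardScaler(),
                    PCA(n_components=10),     # placeholder for CCA/CDA projection
                    MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000,
                                  random_state=0))
clf.fit(X_tr, y_tr)
print(f"hold-out accuracy: {clf.score(X_te, y_te):.3f}")
```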
|
490 |
Input Calibration, Code Validation and Surrogate Model Development for Analysis of Two-phase Circulation Instability and Core Relocation Phenomena. Phung, Viet-Anh, January 2017 (links)
Code validation and uncertainty quantification are important tasks in nuclear reactor safety analysis. Code users have to deal with a large number of uncertain parameters and complex multi-physics, multi-dimensional and multi-scale phenomena. In order to make the results of analysis more robust, it is important to develop and employ procedures for guiding user choices in quantification of the uncertainties. The work aims to further develop approaches and procedures for system analysis code validation and application to practical problems of safety analysis. The work is divided into two parts. The first part presents validation of two reactor system thermal-hydraulic (STH) codes, RELAP5 and TRACE, for prediction of two-phase circulation flow instability. The goals of the first part are to: (a) develop and apply efficient methods for input calibration and STH code validation against unsteady flow experiments with two-phase circulation flow instability, and (b) examine the codes’ capability to predict instantaneous thermal-hydraulic parameters and flow regimes during the transients. Two approaches have been developed: a non-automated procedure based on separate treatment of uncertain input parameters (UIPs) and an automated method using a genetic algorithm. Multiple measured parameters and system response quantities (SRQs) are employed in both calibration of uncertain parameters in the code input deck and validation of the RELAP5 and TRACE codes. The effect of improvement in RELAP5 flow regime identification on code prediction of thermal-hydraulic parameters has been studied. Results of the code validation demonstrate that RELAP5 and TRACE can reproduce the qualitative behaviour of two-phase flow instability. However, both codes misidentified instantaneous flow regimes, and it was not possible to predict the experimental values of oscillation period and maximum inlet flow rate simultaneously. The outcome suggests the importance of simultaneous consideration of multiple SRQs and different test regimes for quantitative code validation. The second part of this work addresses core degradation and relocation to the lower head of a boiling water reactor (BWR). Properties of the debris in the lower head provide initial conditions for vessel failure, melt release and ex-vessel accident progression. The goals of the second part are to: (a) obtain a representative database of MELCOR solutions for characteristics of debris in the reactor lower plenum for different accident scenarios, and (b) develop a computationally efficient surrogate model (SM) that can be used in extensive uncertainty analysis for prediction of the debris bed characteristics. The MELCOR code, coupled with genetic algorithm, random and grid sampling methods, was used to generate a database of the full model solutions and to investigate in-vessel corium debris relocation in a Nordic BWR. Artificial neural networks (ANNs) with classification (grouping) of scenarios have been used for development of the SM in order to address the issue of chaotic response of the full model, especially in the transition region. The core relocation analysis shows that there are two main groups of scenarios: with relatively small (<20 tons) and large (>100 tons) amounts of total relocated debris in the reactor lower plenum. The domains are separated by transition regions, in which a small variation of the input can result in large changes in the final mass of debris.
SMs using multiple ANNs with/without weighting between different groups effectively filter out the noise and provide a better prediction of the output cumulative distribution function, but increase the mean squared error compared to a single ANN. / Validering av datorkoder och kvantifiering av osäkerhetsfaktorer är viktiga delar vid säkerhetsanalys av kärnkraftsreaktorer. Datorkodanvändaren måste hantera ett stort antal osäkra parametrar vid beskrivningen av fysikaliska fenomen i flera dimensioner från mikro- till makroskala. För att göra analysresultaten mer robusta, är det viktigt att utveckla och tillämpa rutiner för att vägleda användaren vid kvantifiering av osäkerheter. Detta arbete syftar till att vidareutveckla metoder och förfaranden för validering av systemkoder och deras tillämpning på praktiska problem i säkerhetsanalysen. Arbetet delas in i två delar. Första delen presenterar validering av de termohydrauliska systemkoderna (STH) RELAP5 och TRACE vid analys av tvåfasinstabilitet i cirkulationsflödet. Målen för den första delen är att: (a) utveckla och tillämpa effektiva metoder för kalibrering av indatafiler och validering av STH mot flödesexperiment med tvåfas cirkulationsflödeinstabilitet och (b) granska datorkodernas förmåga att förutsäga momentana termohydrauliska parametrar och flödesregimer under transienta förlopp. Två metoder har utvecklats: en icke-automatisk procedur baserad på separat hantering av osäkra indataparametrar (UIPs) och en automatiserad metod som använder genetisk algoritm. Ett flertal uppmätta parametrar och systemresponser (SRQs) används i både kalibrering av osäkra parametrar i indatafilen och validering av RELAP5 och TRACE. Resultatet av modifikationer i hur RELAP5 identifierar olika flödesregimer, och särskilt hur detta påverkar datorkodens prediktioner av termohydrauliska parametrar, har studerats. Resultatet av valideringen visar att RELAP5 och TRACE kan återge det kvalitativa beteende av två-fas flödets instabilitet. Däremot kan ingen av koderna korrekt identifiera den momentana flödesregimen, det var därför ej möjligt att förutsäga experimentella värden på svängningsperiod och maximal inloppsflödeshastighet samtidigt. Resultatet belyser betydelsen av samtidig behandling av flera SRQs liksom olika experimentella flödesregimer för kvantitativ kodvalidering. Den andra delen av detta arbete behandlar härdnedbrytning och omfördelning till reaktortankens nedre plenumdel i en kokarvatten reaktor (BWR). Egenskaper hos härdrester i nedre plenum ger inledande förutsättningar för reaktortanksgenomsmältning, hur smältan rinner ut ur reaktortanken och händelseförloppet i reaktorinneslutningen. Målen i den andra delen är att: (a) erhålla en representativ databas över koden MELCOR:s analysresultat för egenskaperna hos härdrester i nedre plenum under olika händelseförlopp, och (b) utveckla en beräkningseffektiv surrogatsmodell som kan användas i omfattande osäkerhetsanalyser för att förutsäga partikelbäddsegenskaper. MELCOR, kopplad till en genetisk algoritm med slumpmässigt urval användes för att generera en databas av analysresultat med tillämpning på smältans omfördelning i reaktortanken i en Nordisk BWR. Analysen av hur härden omfördelas visar att det finns två huvudgrupper av scenarier: med relativt liten (<20 ton) och stor (> 100 ton) total mängd omfördelade härdrester i nedre plenum. Dessa domäner är åtskilda av övergångsregioner, där små variationer i indata kan resultera i stora ändringar i den slutliga partikelmassan.
Flergrupps artificiella neurala nätverk med klassificering av händelseförloppet har använts för utvecklingen av en surrogatmodell för att hantera problemet med kaotiska resultat av den fullständiga modellen, särskilt i övergångsregionen.
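The classification-plus-regression surrogate idea described above can be sketched as follows: first classify a scenario into a small- or large-relocation group, then apply a group-specific ANN regressor for the debris mass. The scenario parameters, the synthetic bimodal response and the network sizes are illustrative assumptions, not actual MELCOR inputs or results:

```python
# Hedged sketch of a grouped surrogate model: a classifier picks the relocation
# regime, then a per-group MLP regressor predicts the relocated debris mass.
# All inputs and the response surface are synthetic placeholders.
import numpy as np
from sklearn.neural_network import MLPClassifier, MLPRegressor

rng = np.random.default_rng(11)
n = 800
X = rng.uniform(0, 1, size=(n, 4))            # scaled scenario parameters (assumed)
# synthetic bimodal response with a sharp transition region
mass = np.where(X[:, 0] + 0.5 * X[:, 1] > 1.0,
                120 + 40 * X[:, 2] + rng.normal(0, 5, n),
                5 + 10 * X[:, 3] + rng.normal(0, 2, n))
group = (mass > 60).astype(int)               # 0: small-relocation regime, 1: large

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
clf.fit(X, group)
regressors = {}
for g in (0, 1):                              # one ANN per scenario group
    reg = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=3000, random_state=0)
    reg.fit(X[group == g], mass[group == g])
    regressors[g] = reg

X_new = rng.uniform(0, 1, size=(5, 4))
pred_group = clf.predict(X_new)
pred_mass = np.array([regressors[g].predict(x.reshape(1, -1))[0]
                      for g, x in zip(pred_group, X_new)])
print(np.round(pred_mass, 1))                 # predicted relocated debris mass (t)
```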
|