121

Non-intrusive Logging and Monitoring System of a Parameterized Hardware-in-the-loop Real-Time Simulator / Icke-påträngande loggnings och övervakningssystem för en parametrerad hårdvara-in-the-loop realtidsimulator

Andung Muntaha, Muhamad January 2019 (has links)
The Electronic Control Unit (ECU) is a crucial component in today's vehicles. A complete vehicle contains many ECUs, each controlling a single function of the vehicle. During the development cycle of an ECU, its functionality needs to be validated against the requirement specification. The Hardware-in-the-loop (HIL) method is commonly used to do this by testing the ECU against a virtual representation of the system it controls. One crucial part of the HIL testing method is an intermediary component that acts as a bridge between the simulation computer and the ECU under test. This component runs a parameterized real-time system that translates messages from the simulation computer to the ECU under test and vice versa, and each of its tasks must complete within a strict real-time deadline.

A logging and monitoring system is needed to ensure that the intermediary component is functioning correctly. This functionality is implemented as additional low-priority tasks that run concurrently with the high-priority message translation tasks. The implementation of these tasks, along with a distributed system that supports the logging and monitoring functionality, is presented in this thesis.

Several execution time measurements are carried out to determine how the parameters of a task affect its execution time. Linear regression analysis is then used to model the execution time of the parameterized tasks. Finally, time-demand analysis is applied to guarantee that the system is schedulable.
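The schedulability argument summarized above can be illustrated with a short sketch. The following Python snippet shows a generic fixed-priority time-demand analysis; the task set, the regression coefficients and the helper estimated_wcet are invented for illustration and are not the thesis's actual parameters.

```python
import math

def time_demand_ok(tasks):
    """Fixed-priority time-demand analysis.

    tasks: list of (C, T) tuples sorted by priority (highest first), with
    worst-case execution time C and period (= deadline) T.  Task i is
    schedulable if some t <= T_i satisfies
        w_i(t) = C_i + sum_{j higher priority} ceil(t / T_j) * C_j <= t.
    """
    for i, (C_i, T_i) in enumerate(tasks):
        t = C_i + sum(C_j for C_j, _ in tasks[:i])        # initial guess
        while t <= T_i:
            w = C_i + sum(math.ceil(t / T_j) * C_j for C_j, T_j in tasks[:i])
            if w <= t:                 # demand met by time t -> task schedulable
                break
            t = w                      # iterate the response-time recurrence
        else:
            return False               # no fixed point within the deadline
    return True

# Hypothetical parameterized tasks: WCET estimated by a linear model
# C = a + b * n_signals fitted from measurements (coefficients made up).
def estimated_wcet(n_signals, a=0.05, b=0.002):
    return a + b * n_signals

tasks = [(estimated_wcet(40), 1.0),    # high-priority translation task
         (estimated_wcet(10), 5.0),    # logging task
         (estimated_wcet(5), 10.0)]    # monitoring task
print(time_demand_ok(tasks))
```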
122

Fuzzy criticality assessment for process equipments maintenance

Qi, Hong Sheng, Liu, Q., Wood, Alastair S., Alzaabi, R.N. January 2012 (has links)
Criticality-based maintenance (CBM) is a prioritized approach to the maintenance of (industrial) process equipment. CBM requires personnel with a thorough knowledge of the process/equipment under scrutiny. In this paper, a criticality assessment system implemented by a local company (which encodes the expertise and knowledge of the company's experts) is reviewed, and fuzzy logic theory is applied to improve the system's capability and reliability. The quality of the fuzzy system is evaluated on several case studies. The results show that the fuzzy-logic-based system not only does what the conventional system does, but also outperforms it in terms of reliability and offers a unique ranking capability.
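To make the kind of assessment described above concrete, the toy sketch below scores and ranks equipment with a small fuzzy rule base. The membership functions, rules, criteria and equipment figures are all invented for illustration; they are not the company's system or the paper's rule base.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b on [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def criticality_score(failure_rate, downtime_cost, safety_impact):
    """Toy fuzzy criticality assessment; each input is on a 0-10 scale.

    Inputs are fuzzified into low/medium/high, a few illustrative rules are
    fired with min (AND), and the result is defuzzified by a weighted average
    of rule consequents (0 = low criticality, 5 = medium, 10 = high).
    """
    def fuzzify(x):
        return {"low": tri(x, -5, 0, 5), "med": tri(x, 0, 5, 10), "high": tri(x, 5, 10, 15)}

    fr, dc, si = fuzzify(failure_rate), fuzzify(downtime_cost), fuzzify(safety_impact)
    rules = [  # (firing strength, consequent centroid) -- invented rules
        (min(fr["high"], si["high"]), 10.0),
        (min(fr["med"], dc["high"]), 10.0),
        (min(fr["med"], dc["med"], si["low"]), 5.0),
        (min(fr["low"], dc["low"], si["low"]), 0.0),
    ]
    num = sum(w * c for w, c in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0

# Rank equipment by fuzzy criticality (all figures hypothetical).
equipment = {"pump P-101": (7, 8, 4), "valve V-23": (3, 2, 1)}
ranking = sorted(equipment, key=lambda k: criticality_score(*equipment[k]), reverse=True)
print(ranking)
```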
123

Identifying vertices in graphs and digraphs

Skaggs, Robert Duane 28 February 2007 (has links)
The closed neighbourhood of a vertex in a graph is the vertex together with the set of adjacent vertices. A differentiating-dominating set, or identifying code, is a collection of vertices whose intersection with the closed neighbourhood of each vertex is distinct and nonempty. A differentiating-dominating set in a graph serves to uniquely identify all the vertices in the graph. Chapter 1 begins with the necessary definitions and background results and provides motivation for the following chapters. Chapter 1 includes a summary of the lower identification parameters, γL and γd. Chapter 2 defines co-distinguishable graphs and determines bounds on the number of edges in graphs which are distinguishable and co-distinguishable, while Chapter 3 describes the maximum number of vertices needed in order to identify vertices in a graph, and includes some Nordhaus-Gaddum type results for the sum and product of the differentiating-domination number of a graph and its complement. Chapter 4 explores criticality, in which any minor modification in the edge or vertex set of a graph causes the differentiating-domination number to change. Chapter 5 extends the identification parameters to allow for orientations of the graphs in question and considers the question of when adding orientation helps reduce the value of the identification parameter. We conclude with a survey of complexity results in Chapter 6 and a collection of interesting new research directions in Chapter 7. / Mathematical Sciences / PhD (Mathematics)
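The definition of an identifying code translates directly into a simple check. The sketch below (adjacency-list representation; the example graph and candidate sets are illustrative) tests whether a vertex set is a differentiating-dominating set: every closed neighbourhood must intersect the set in a nonempty way, and those intersections must be pairwise distinct.

```python
def is_identifying_code(adj, code):
    """Check whether `code` is a differentiating-dominating set (identifying code).

    adj:  dict mapping each vertex to the set of its neighbours.
    code: candidate set of vertices.
    Every vertex v must satisfy N[v] ∩ code != ∅, and the sets
    N[v] ∩ code must be distinct for distinct vertices.
    """
    signatures = {}
    for v, nbrs in adj.items():
        sig = frozenset((nbrs | {v}) & code)       # closed neighbourhood ∩ code
        if not sig or sig in signatures.values():
            return False                            # empty or duplicated signature
        signatures[v] = sig
    return True

# Illustrative example: the path P4 on vertices 1-2-3-4.
path4 = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(is_identifying_code(path4, {1, 2, 3}))  # True: all signatures distinct
print(is_identifying_code(path4, {1, 3}))     # False: vertices 3 and 4 share signature {3}
```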
124

Design and quality of service of mixed criticality systems in embedded architectures based on Network-on-Chip (NoC) / Dimensionnement et Qualité de Service pour les systèmes à criticité mixte dans les architectures embarquées à base de Network on Chip (NoC)

Papastefanakis, Ermis 28 November 2017 (has links)
The evolution of Systems-on-Chip (SoCs) is rapid and the number of processors has increased, marking the transition from Multi-core to Manycore platforms. In such platforms, the interconnect architecture has also shifted from traditional buses to Networks-on-Chip (NoC) in order to cope with scalability. NoCs allow the processors to exchange information with memory and peripherals during task execution and enable multiple communications in parallel. NoC-based platforms are also present in embedded systems, characterized by requirements like predictability, security and mixed criticality. In order to enable such features in existing commercial platforms, it is necessary to take into consideration the NoC, which is a key element with an important impact on a SoC's performance. A task exchanges information through the NoC and, as a result, its execution time depends on the transmission time of the flows it generates. By calculating the Worst Case Transmission Time (WCTT) of flows in the NoC, a step is made towards the calculation of the Worst Case Execution Time (WCET) of a task. This contributes to the overall predictability of the system. Similarly, by leveraging arbitration and traffic policies in the NoC, it is possible to provide security guarantees against compromised tasks that might try to saturate the system's resources (DoS attack). In safety-critical systems, a distinction of tasks in relation to their criticality level allows tasks of mixed criticality to co-exist and execute in harmony. In addition, it allows critical tasks to maintain their execution times at the cost of tasks of lower criticality, which will be either slowed down or stopped. This thesis aims to provide methods and mechanisms that contribute to the axes of predictability, security and mixed criticality in NoC-based Manycore architectures. In addition, the incentive is to jointly address the challenges in these three axes, taking into account their mutual impact. Each axis has been researched individually, but very little research takes their interdependence into consideration. This fusion of aspects is becoming more and more intrinsic in fields like the Internet-of-Things, Cyber-Physical Systems (CPSs), and connected and autonomous vehicles, which are gaining momentum. The reason is their high degree of connectivity, which creates great exposure, as well as their increasing presence, which makes attacks severe and visible. The contributions of this thesis consist of a method to provide predictability to a set of flows in the NoC, a mechanism to provide security properties to the NoC, and a toolkit for traffic generation used for benchmarking. The first contribution is an adaptation of the trajectory approach traditionally used in avionics networks (AFDX) to calculate WCET. In this thesis, we identify the differences and similarities in NoC architecture and modify the trajectory approach in order to calculate the WCTT of NoC flows.

The second contribution is a mechanism that detects DoS attacks and mitigates their impact in a mixed-criticality set of flows. More specifically, a monitoring mechanism detects abnormal behavior and activates a mitigation mechanism. The latter applies traffic shaping at the source and restricts the rate at which the NoC is occupied. This limits the impact of the attack, guaranteeing resource availability for high-criticality tasks. Finally, NTGEN is a toolkit that can automatically generate random sets of flows that result in a predetermined NoC occupancy. These sets are then injected into the NoC, and latency-related information is collected.
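The source-side mitigation described above is essentially rate limiting. The sketch below shows one common way such shaping can be implemented, a token bucket; the parameter values and packet model are illustrative assumptions, not the thesis's actual mechanism.

```python
class TokenBucket:
    """Simple token-bucket shaper limiting the rate at which a source
    may inject packets/flits into the NoC.

    rate:  tokens replenished per time unit (long-term injection rate)
    burst: bucket capacity (maximum burst size)
    """
    def __init__(self, rate, burst):
        self.rate = rate
        self.burst = burst
        self.tokens = burst
        self.last = 0.0

    def allow(self, now, size=1):
        """Return True if a packet of `size` units may be injected at time `now`."""
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size:
            self.tokens -= size
            return True
        return False    # packet is delayed or dropped at the source

# Illustrative use: a low-criticality source throttled to 0.2 packets per cycle.
shaper = TokenBucket(rate=0.2, burst=4)
injected = [t for t in range(50) if shaper.allow(float(t))]
print(len(injected), "of 50 injection attempts admitted")
```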
125

Criticalidade auto-organizada no modelo Olami-Feder-Christensen

Carvalho, Josué Xavier de 22 March 2002 (has links)
In this work we investigated the Olami-Feder-Christensen (OFC) model. The model presents strong temporal and spatial correlations, which make analytical results very difficult to obtain, so our treatment was numerical. We developed strategies to identify the stationary regime efficiently and with a high level of accuracy, and we noticed that, depending on the initial configuration, the statistically stationary state can be reached sooner or later. Finally, we investigated the criticality of the model through a new approach: instead of trying to identify the critical behaviour of the system through the avalanche distribution, we defined a quantity that, in a simple branching process, would be the branching ratio of the system. Analysing the behaviour of this variable in a phase space, we verified that the OFC model and its random version (which is known to be critical only in the conservative regime) behave very similarly. Contrary to what was previously believed, we obtained strong numerical evidence that the OFC model is critical only in the conservative regime.
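For readers unfamiliar with the model, the sketch below simulates the standard OFC update rules on a small open lattice; the lattice size, the dissipation parameter α and the run length are arbitrary illustrative choices, and the branching-ratio diagnostic used in the thesis is not reproduced here.

```python
import random

def ofc_avalanche_sizes(L=32, alpha=0.20, f_th=1.0, n_avalanches=2000, seed=1):
    """Minimal Olami-Feder-Christensen model on an L x L open lattice.

    alpha is the fraction of a toppling site's force passed to each of its
    (up to four) neighbours; alpha = 0.25 is the conservative limit.
    Returns the list of avalanche sizes (number of topplings per avalanche).
    """
    rng = random.Random(seed)
    F = [[rng.uniform(0, f_th) for _ in range(L)] for _ in range(L)]
    sizes = []
    for _ in range(n_avalanches):
        # Drive: raise all sites uniformly so the most loaded site reaches threshold.
        imax, jmax = max(((i, j) for i in range(L) for j in range(L)),
                         key=lambda p: F[p[0]][p[1]])
        dF = f_th - F[imax][jmax]
        for i in range(L):
            for j in range(L):
                F[i][j] += dF
        F[imax][jmax] = f_th          # guard against floating-point rounding
        unstable = [(imax, jmax)]
        size = 0
        # Relax: topple every site at or above threshold until none remain.
        while unstable:
            nxt = []
            for i, j in unstable:
                if F[i][j] < f_th:
                    continue          # already toppled earlier in this sweep
                size += 1
                give = alpha * F[i][j]
                F[i][j] = 0.0
                for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
                    if 0 <= ni < L and 0 <= nj < L:
                        F[ni][nj] += give
                        if F[ni][nj] >= f_th:
                            nxt.append((ni, nj))
            unstable = nxt
        sizes.append(size)
    return sizes

sizes = ofc_avalanche_sizes()
print("mean avalanche size:", sum(sizes) / len(sizes))
```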
126

Exploitation des informations de traçabilité pour l'optimisation des choix en production et en logistique / Exploiting traceability information in order to optimize production and logistic choices

Tamayo Giraldo, Simon 05 December 2011 (has links)
Recent product traceability requirements demonstrate an industrial need to improve the information management strategies within traceability systems in order to evolve from reactivity to proactivity. The aim of this work is to exploit the recently available real-time access to traceability information. We propose the use of artificial intelligence and operational research techniques to analyse this information and suggest improvement actions. This research project is composed of two main activities: first, the diagnosis of the criticality value associated with a production run on the basis of the traceability information, and second, the actions to undertake as a result of this diagnosis. One of the issues studied in this thesis is the problem of minimizing the size of product recalls. Initially, the problem of minimizing raw-material dispersion is analysed. The resulting dispersion rate, along with other production criteria, is then evaluated in order to determine a risk-level criterion in terms of quality and security that we name "production criticality". This criterion is subsequently used to optimize the dispatch of deliveries with the purpose of minimizing the number of batch recalls in case of a crisis. This is achieved by implementing flexible and reactive tools.
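The notions of raw-material dispersion and recall size used above can be illustrated on a toy traceability record; the data structures, identifiers and links below are invented for illustration only.

```python
from collections import defaultdict

# Toy traceability records: which raw-material lots went into which finished batches.
batch_composition = {
    "batch-A": ["lot-1", "lot-2"],
    "batch-B": ["lot-1"],
    "batch-C": ["lot-2", "lot-3"],
    "batch-D": ["lot-3"],
}

def dispersion(batches):
    """Dispersion of each raw-material lot = number of finished batches it reaches."""
    spread = defaultdict(set)
    for batch, lots in batches.items():
        for lot in lots:
            spread[lot].add(batch)
    return {lot: len(bs) for lot, bs in spread.items()}

def recall_size(batches, suspect_lot):
    """Batches to recall if `suspect_lot` turns out to be non-conforming."""
    return sorted(b for b, lots in batches.items() if suspect_lot in lots)

print(dispersion(batch_composition))            # how widely each lot is spread
print(recall_size(batch_composition, "lot-2"))  # ['batch-A', 'batch-C']
```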
127

The Structural Integrity And Damage Tolerance Of Composite T-Joints in Naval Vessels

Dharmawan, Ferry, ferry.dharmawan@rmit.edu.au January 2008 (has links)
In this thesis, the application of composite materials to marine structures, and specifically naval vessels, is explored by investigating their damage criticality. Composite materials are desirable for Mine Counter Measure Vessels (MCMVs) because of the material characteristics they offer, such as light weight, corrosion resistance, design flexibility due to their anisotropic nature and, most importantly, stealth capability. The T-Joint structure, as the primary connection between the hull and bulkhead, forms the focus of this research. The aim of the research was to determine a methodology to predict the damage criticality of the T-Joint under a pull-off tensile loading using Finite Element (FE) based fracture mechanics theory. The outcome of the research was that FE simulations were used in conjunction with fracture mechanics theory to determine the failure mechanism of the T-Joint in the presence of disbonds in the critical location. This enables certain pre-emptive strengthening mechanisms or other preventive solutions to be applied, since the T-Joint responses can be predicted precisely. This knowledge contributes to the damage tolerance design methodology for ship structures, particularly in T-Joint design. The comparison between the VCCT (Virtual Crack Closure Technique) analysis and the experimental results showed that the VCCT is a dependable analytical method for predicting the T-Joint failure mechanisms. It was capable of accurately determining the crack initiation and final fracture loads. The maximum difference between the VCCT analysis and the experimental results was approximately 25% for the T-Joint with a horizontal disbond. However, the application of the CTE (Crack Tip Element) method to the T-Joint displayed a large discrepancy compared with the results (fracture toughness) obtained using the VCCT method, because the current T-Joint geometry did not meet the Classical Laminate Plate Theory (CLPT) criteria. The minimum fracture toughness difference between the two analytical methods was approximately 50%. However, it was also found that when the T-Joint geometry satisfied the CLPT criteria, the maximum fracture toughness discrepancy between the two analytical methods was only approximately 10%. It was later established from the Griffith energy principle that the fracture toughness differences between the two analytical methods were due to the difference in material compliance, since the two analytical methods used different T-Joint structures.
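The VCCT calculation referred to above has a simple core: the energy released by advancing the crack by one element length is estimated from the nodal force at the crack tip and the relative displacement of the node pair just behind it. The sketch below shows a mode-I form of this estimate for a 2-D model; the variable names and numbers are illustrative, and in practice these quantities come from the FE solution of the T-Joint.

```python
def vcct_mode_I(F_y, delta_v, da, thickness=1.0):
    """Mode-I energy release rate from the Virtual Crack Closure Technique (2-D).

    F_y      : nodal force normal to the crack plane at the crack-tip node
    delta_v  : relative opening displacement of the node pair just behind the tip
    da       : element length ahead of the crack tip (virtual crack extension)
    thickness: model thickness (use 1.0 for per-unit-width results)
    Returns G_I = F_y * delta_v / (2 * da * thickness).
    """
    return F_y * delta_v / (2.0 * da * thickness)

# Illustrative numbers only (N, mm): G_I comes out in N/mm = kJ/m^2.
G_I = vcct_mode_I(F_y=120.0, delta_v=0.015, da=0.5, thickness=10.0)
G_Ic = 0.35   # hypothetical fracture toughness of the bond line, N/mm
print(f"G_I = {G_I:.3f} N/mm -> {'disbond grows' if G_I >= G_Ic else 'no growth'}")
```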
128

Development of New Monte Carlo Methods in Reactor Physics : Criticality, Non-Linear Steady-State and Burnup Problems

Dufek, Jan January 2009 (has links)
The Monte Carlo method is practically the only approach capable of giving detailed insight into complex neutron transport problems. In reactor physics, the method has been used mainly for determining keff in criticality calculations. In the last decade, continuously growing computer performance has made it possible to apply the Monte Carlo method also to simple burnup simulations of nuclear systems. Nevertheless, due to its extensive computational demands, the Monte Carlo method is still not used as commonly as deterministic methods. One of the reasons for the large computational demands of Monte Carlo criticality calculations is the need to carry out a number of inactive cycles to converge the fission source. This thesis presents a new concept of fission matrix based Monte Carlo criticality calculations in which inactive cycles are not required. It is shown that the fission matrix is not sensitive to errors in the fission source, and can thus be calculated by a Monte Carlo calculation without inactive cycles. All required results, including keff, are then derived via the final fission matrix. The confidence interval for the estimated keff can be conservatively derived from the variance in the fission matrix. This was confirmed by numerical test calculations of Whitesides's "keff of the world" problem, where other Monte Carlo methods fail to estimate the confidence interval correctly unless a large number of inactive cycles is simulated.

Another problem is that the existing Monte Carlo criticality codes are not well suited to parallel computation; they cannot fully utilise the processing power of modern multi-processor computers and computer clusters. This thesis presents a new parallel computing scheme for Monte Carlo criticality calculations based on the fission matrix. The fission matrix is combined over a number of independent parallel simulations, and the final results are derived by means of the fission matrix. This scheme allows practically ideal parallel scaling, since no communication among the parallel simulations is required and no inactive cycles need to be simulated.

When Monte Carlo criticality calculations are sufficiently fast, they will be applied more commonly to complex reactor physics problems, such as non-linear steady-state calculations and fuel cycle calculations. This thesis develops an efficient method that introduces thermal-hydraulic and other feedbacks into the numerical model of a power reactor, allowing a non-linear Monte Carlo analysis of the reactor with steady-state core conditions to be carried out. The thesis also shows that the major existing Monte Carlo burnup codes use unstable algorithms for coupling the neutronic and burnup calculations and therefore cannot be used for fuel cycle calculations. Nevertheless, stable coupling algorithms are known and can be implemented in future Monte Carlo burnup codes. / QC 20100709
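The central idea, deriving keff from the fission matrix, can be shown in a few lines: keff is the dominant eigenvalue of the matrix whose entry (i, j) is the expected number of fission neutrons produced in region i per fission neutron born in region j. The 3-region matrix below is invented for illustration; in the thesis the entries are tallied by the Monte Carlo simulation itself.

```python
import numpy as np

def keff_from_fission_matrix(F, tol=1e-10, max_iter=10000):
    """Estimate keff and the fission source as the dominant eigenpair of the
    fission matrix F via power iteration.

    F[i, j] = expected number of fission neutrons born in region i per
    fission neutron born in region j (tallied during the Monte Carlo run).
    """
    n = F.shape[0]
    s = np.full(n, 1.0 / n)              # initial fission source guess
    k = 1.0
    for _ in range(max_iter):
        s_new = F @ s
        k_new = s_new.sum() / s.sum()    # eigenvalue estimate
        s_new /= s_new.sum()             # renormalize the source
        if abs(k_new - k) < tol and np.allclose(s_new, s, atol=tol):
            return k_new, s_new
        k, s = k_new, s_new
    return k, s

# Invented 3-region fission matrix for illustration only.
F = np.array([[0.60, 0.20, 0.05],
              [0.25, 0.55, 0.20],
              [0.05, 0.20, 0.50]])
k, source = keff_from_fission_matrix(F)
print("keff ≈", round(k, 5), "fission source ≈", np.round(source, 3))
```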
129

Application of sandwich structure analysis in predicting critical flow velocity for a laminated flat plate

Jensen, Philip (Philip J.) 08 March 2013 (has links)
The Oregon State University (OSU) Hydro Mechanical Fuel Test Facility (HMFTF) is designed to hydro-mechanically test prototypical plate-type fuel. OSU's fuel test program is part of the Global Threat Reduction Initiative (GTRI), formerly known as the Reduced Enrichment for Research and Test Reactors program. One of the GTRI's goals is to convert all civilian research and test reactors in the United States from highly enriched uranium (HEU) to low enriched uranium (LEU) fuel in an effort to reduce nuclear proliferation. An analytical model has been developed, and is described in detail, which complements the experimental work being performed at the OSU HMFTF and advances the science of hydro-mechanics. This study investigates two methods for determining the critical flow velocity for a pair of laminated plates. The objective is accomplished by incorporating a flexural rigidity term into the formulation of critical flow velocity originally derived by Miller, and by employing sandwich structure theory to determine the rigidity term. The final outcome of this study is a single equation for each of three different edge boundary conditions which reliably and comprehensively predicts the onset of plate collapse. The two models developed and presented are termed the monocoque analogy and the ideal laminate model. / Graduation date: 2013
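One way to see the role of the flexural rigidity term is to write Miller's classical critical-velocity expression, V_c = [15 E t³ s / (ρ b⁴ (1 − ν²))]^{1/2}, in terms of the rigidity D = E t³ / (12 (1 − ν²)), which gives V_c = sqrt(180 D s / (ρ b⁴)); a laminated plate can then be handled by substituting an equivalent rigidity. The sketch below follows that substitution with a common thin-face sandwich approximation; the formulas as rearranged here and all numbers are illustrative assumptions, not the thesis's final equations.

```python
import math

def flexural_rigidity_monocoque(E, t, nu):
    """Flexural rigidity D of a homogeneous plate of thickness t (per unit width)."""
    return E * t**3 / (12.0 * (1.0 - nu**2))

def flexural_rigidity_sandwich(E_f, t_f, d, nu):
    """Thin-face sandwich approximation: two face sheets of modulus E_f and
    thickness t_f whose centroids are a distance d apart; core bending neglected."""
    return E_f * t_f * d**2 / (2.0 * (1.0 - nu**2))

def miller_critical_velocity(D, s, rho, b):
    """Miller-type critical flow velocity written in terms of flexural rigidity D:
    V_c = sqrt(180 * D * s / (rho * b**4)) for channel gap s, coolant density rho
    and plate span b (equivalent to 15*E*t^3*s / (rho*b^4*(1-nu^2)) for a
    homogeneous plate)."""
    return math.sqrt(180.0 * D * s / (rho * b**4))

# Illustrative (made-up) numbers in SI units.
E_f, t_f, d, nu = 70e9, 0.4e-3, 1.0e-3, 0.33   # aluminium face sheets
s, rho, b = 2.5e-3, 998.0, 70e-3               # water channel gap, density, plate span
D_lam = flexural_rigidity_sandwich(E_f, t_f, d, nu)
print(f"D ≈ {D_lam:.2f} N·m, V_c ≈ {miller_critical_velocity(D_lam, s, rho, b):.1f} m/s")
```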
130

ASPIRE: Adaptive Service Provider Infrastructure for VANETs

Koulakezian, Agop 25 August 2011 (has links)
User desire for ubiquitous applications on board a vehicle motivates the need for Network Mobility (NEMO) solutions for Vehicular Ad-Hoc Networks (VANETs). Due to the dynamic topology of VANETs, this approach incurs excessive infrastructure cost to maintain stable connectivity and support these applications. Our solution to this problem is focused on a novel NEMO-based network architecture in which vehicles are the main network infrastructure. Within this architecture, we present a Network Criticality-based clustering algorithm, which adapts to mobility changes to form stable, self-organizing clusters of vehicles and dynamically builds on vehicle clusters to form more stable Mobile Networks. Simulation results show that the proposed method provides more stable clusters, fewer handoffs and better connectivity compared to popular density-based vehicle clustering methods. In addition, they confirm the validity of the proposed network architecture. The proposed method is also robust to channel error and exhibits better performance when the heterogeneity of vehicles is exploited.
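Network criticality, the metric the clustering algorithm builds on, is commonly computed from the Moore-Penrose pseudo-inverse of the graph Laplacian. The sketch below uses one such formulation, τ = 2 n · trace(L⁺), on an invented contact graph; it only illustrates the kind of quantity involved and is not the thesis's exact algorithm.

```python
import numpy as np

def network_criticality(adj):
    """Network criticality of an undirected graph: tau = 2 * n * trace(L^+),
    where L^+ is the Moore-Penrose pseudo-inverse of the Laplacian.  Smaller
    values indicate a better-connected, more robust topology.

    adj: symmetric (weighted) adjacency matrix as a numpy array.
    """
    degrees = adj.sum(axis=1)
    L = np.diag(degrees) - adj
    L_pinv = np.linalg.pinv(L)
    n = adj.shape[0]
    return 2.0 * n * np.trace(L_pinv)

# Invented 4-vehicle contact graphs: a chain versus a cycle of the same size.
chain = np.array([[0, 1, 0, 0],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
cycle = chain.copy()
cycle[0, 3] = cycle[3, 0] = 1.0
print("chain:", round(network_criticality(chain), 2),
      "cycle:", round(network_criticality(cycle), 2))   # the cycle is less critical
```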
