201

FRAMEWORK FOR IMPLEMENTATION OF AUTONOMOUS MAINTENANCE WITH THE HUMAN, TECHNOLOGICAL AND ORGANIZATIONAL (HTO) APPROACH

PAULO CEZAR LOURES 30 April 2021
Autonomous Maintenance (AM) is part of a maintenance strategy that focuses on the man-machine relationship to carry out cleaning, lubrication, inspection and minor repairs effectively. When properly implemented, AM can significantly improve productivity and quality and reduce cost, making it an important area of Operations Management (OM). However, industry faces numerous barriers to implementing AM successfully, and academia has done little to help in this regard. This dissertation addresses this research-practice gap; its main goal is a framework for implementing AM along the Human, Technological and Organizational (HTO) dimensions. It builds upon action research conducted within a longitudinal study of a hot strip rolling process at a steel plant. The HTO approach has been applied successfully by OM scholars in different cases and is well documented in the literature; however, the author suggests that this is the first study to use it within AM. The findings indicate that the HTO dimensions match the challenges of AM implementation and reinforce the need for a holistic, combined perspective of these dimensions for business development, yielding gains of both a quantitative and a qualitative nature; fourteen lessons were learned with immediate practical implications. Practitioners can take stock of the lessons learnt within this action research and use the proposed framework to aid the successful implementation of AM in their industrial operations.
202

OPTIMIZATION OF THE MAINTENANCE SCHEDULE FOR THE TURBINES OF A HYDROELECTRIC PLANT

PATRICIA DE SOUSA OLIVEIRA 29 April 2021
Planning the maintenance schedule of hydroelectric generating units is of great interest to several agents in the Brazilian energy sector. A correct approach to this problem can prevent the degradation of physical assets and minimize the likelihood of forced outages of the equipment; the agents' operational strategies are therefore also influenced by the turbine maintenance schedule. The Brazilian system has some particularities: the economic groups that win the auction to build a new hydroelectric plant must meet a series of technical specifications, among them the Availability Factor (FID, from the Portuguese Fator de Disponibilidade). The FID is directly influenced by the hours of maintenance performed at the plant, and performance below the level stipulated in the concession contracts can result in financial penalties imposed by the regulator. With this motivation in mind, the present work proposes a methodology for scheduling generator maintenance at a hydroelectric plant, developing a mathematical model to determine the ideal moment to carry out maintenance while considering the operational restrictions and regulatory aspects of hydroelectric plants. The work also derives a ranking of the turbines using the Analytic Hierarchy Process (AHP), based on the maintenance indicators of the plant studied; the purpose of this ranking is to support maintenance planning and thereby increase equipment availability. The proposed optimization is formulated as a Mixed Integer Linear Program (MILP), in which the integer variables encode the operating state and the maintenance start date of each generating unit. Representing the hydrothermal system is challenging, mainly because of the stochasticity of natural inflows. Finally, to validate the proposed model, a case study is carried out for a real large-scale plant in the Brazilian system, the Santo Antônio Hydroelectric Plant.
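The decision problem behind the MILP formulation can be illustrated with a toy version. The sketch below is a brute-force stand-in for a proper MILP solver (which the thesis uses); the unit names, horizon length and weekly generation values are invented for illustration. Each unit must receive one maintenance window, at most one unit may be down at a time, and the schedule minimizes the value of generation foregone.

```python
from itertools import product

# Illustrative data (assumed): weekly market value of each unit's output
# over an 8-week horizon, and maintenance duration per unit in weeks.
value = {
    "U1": [9, 8, 7, 3, 2, 4, 6, 9],
    "U2": [5, 5, 6, 7, 8, 3, 2, 5],
}
duration = {"U1": 2, "U2": 2}
horizon = 8

def outage_weeks(unit, start):
    return range(start, start + duration[unit])

def feasible(starts):
    # Windows must fit inside the horizon and must not overlap across
    # units (at most one unit in maintenance at a time).
    down = []
    for u, s in starts.items():
        if s + duration[u] > horizon:
            return False
        down.extend(outage_weeks(u, s))
    return len(down) == len(set(down))

def cost(starts):
    # Value of the generation lost while each unit is down.
    return sum(value[u][w] for u, s in starts.items() for u_w in [u] for w in outage_weeks(u, s))

best = min(
    (dict(zip(duration, c))
     for c in product(range(horizon), repeat=len(duration))
     if feasible(dict(zip(duration, c)))),
    key=cost,
)
print(best, cost(best))
```

In a real MILP the same choice is encoded with binary start variables and linear no-overlap constraints; the brute force above only works because the toy instance is tiny.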
203

Uncertainty Quantification Using Simulation-based and Simulation-free methods with Active Learning Approaches

Zhang, Chi January 2022 (has links)
No description available.
204

Dynamic Soil-Structure Interaction Analysis of Railway Bridges : Numerical and Experimental Results

Zangeneh Kamali, Abbas January 2018
The work reported in this thesis presents a general overview of the dynamic response of short-span railway bridges considering soil-structure interaction. The study aims to identify the effect of the surrounding and underlying soil on the global stiffness and damping of the structural system, which may lead to better assumptions and more efficient numerical models for design. A simple discrete model for calculating the dynamic characteristics of the fundamental bending mode of single-span beam bridges on viscoelastic supports is proposed. This model was used to study the effect of the dynamic stiffness of the foundation on the modal parameters (e.g. natural frequency and damping ratio) of railway beam bridges. It was shown that variation in the underlying soil profile leads to a different dynamic response of the system. This effect depends on the ratio between the flexural stiffness of the bridge and the dynamic stiffness of the foundation-soil system, but also on the ratio between the resonant frequency of the soil layer and the fundamental frequency of the bridge. The effect of the surrounding soil on the vertical dynamic response of portal frame bridges was also investigated, both numerically and experimentally. To this end, different numerical models (i.e. full FE models and coupled FE-BE models) were developed. Controlled vibration tests were performed on two full-scale portal frame bridges to determine the modal properties of the bridge-soil system and calibrate the numerical models. Both experimental and numerical results identified the substantial contribution of the surrounding soil to the global damping of short-span portal frame bridges. A simplified model of the surrounding soil was also proposed in order to provide a less complicated model appropriate for practical design purposes.
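The idea of a discrete model on viscoelastic supports can be caricatured in a few lines. The sketch below is a minimal illustration, not the thesis model: it lumps the bridge into a single midspan mass on the beam's point stiffness, placed in series with two vertical support springs standing in for the foundation-soil system, so that softer supports lower the effective stiffness and hence the natural frequency. All numerical values are assumptions.

```python
import math

def natural_frequency(EI, L, m, k_support=None):
    """Fundamental frequency (Hz) of a lumped single-DOF bridge model.

    EI: flexural rigidity (N*m^2); L: span (m); m: lumped midspan mass (kg);
    k_support: vertical stiffness of each of the two supports (N/m),
    None meaning rigid supports.
    """
    k_beam = 48.0 * EI / L**3  # midspan point stiffness of a simply supported beam
    if k_support is None:
        k_eff = k_beam
    else:
        # Beam flexibility and the two support springs act in series.
        k_eff = 1.0 / (1.0 / k_beam + 1.0 / (2.0 * k_support))
    return math.sqrt(k_eff / m) / (2.0 * math.pi)

# Assumed example values for a short-span bridge:
f_rigid = natural_frequency(EI=2.0e10, L=15.0, m=1.2e5)
f_soft = natural_frequency(EI=2.0e10, L=15.0, m=1.2e5, k_support=5.0e8)
print(f_rigid, f_soft)  # flexible supports lower the natural frequency
```

The real model in the thesis also includes the damping of the viscoelastic supports, which this stiffness-only sketch omits.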
205

New approaches for the analysis and classification of very high resolution data: methods of segmentation, hierarchical classification and the per-parcel method with data from the digital camera HRSC-A, and their applicability for updating topographic maps

Hoffmann, Andrea 10 May 2001
Remote sensing is going through a time of fundamental change. New digital airborne camera systems and high-resolution satellite systems offer new potential for data acquisition and interpretation. These data sets will replace conventional aerial photography in the near future and substantially change photogrammetry, cartography and remote sensing. The new camera generation will influence two central domains of cartography: orthomap production and map updating. As a basis for timely updating, orthomaps have become more and more important, particularly for geoinformation systems. Up to now, large-scale mapping (scales > 1:10,000) has been done almost exclusively with aerial photographs. It is shown that the digital data sets of the new camera generation can be used operationally for orthomap production: a fully automated processing line provides the ortho-images very shortly after acquisition, and thanks to the high-precision navigation system the data reach a geometric accuracy that could only be achieved with great effort by conventional means. A comparison with aerial photographs discusses and rates the properties of the two acquisition systems. For interpretation, data sets of the digital camera HRSC-A of DLR Adlershof were used. The High Resolution Stereo Camera - Airborne (HRSC-A) and the software developed specifically for processing its data provide geoinformation users for the first time with an operational system that produces high-resolution ortho-image data entirely digitally and fully automatically. The pixel size ranges between 10 and 40 cm (flight altitudes of 2,500 to 10,000 m). The simultaneous availability of high-resolution panchromatic and multispectral data sets, of a high-resolution surface model (x,y: 50 cm or 1 m; z: 10 cm) and the high accuracy of the data sets proved advantageous for the analysis. The thesis then discusses the problems of automated interpretation of high-resolution data, which places new demands on interpretation procedures. The richness of detail complicates interpretation, whereas coarser spatial resolutions smooth out the complexity within heterogeneous land cover types (especially in urban areas) and thus make automated interpretation easier. It is shown that "classical" methods such as pixel-based classification (supervised or unsupervised) are only of limited use for these data. Two new approaches were therefore developed and investigated that work not pixel by pixel but on regions and objects. A per-parcel approach showed good results: the method first identifies, via an unsupervised classification, scene components within defined sub-units (parcels) that represent the content of the data set; the classified pixels within each parcel are then extracted and their proportions analysed further. The result is the percentage distribution of scene components per parcel, from which relationships between the components present and the land cover type are inferred; a set of rules was devised to identify a range of land cover types from the component mixtures found within each parcel. Secondly, an object-oriented, multi-scale approach was investigated that allows the interpretation of single objects. The image is segmented into homogeneous objects that form the basis of the further analysis. The approach consists of two strategies: by means of multi-scale segmentation the image is first structured into units, with several scale levels available simultaneously, the underlying idea being to build a hierarchical network of image objects; these objects are then classified spectrally by nearest-neighbour rules or in a knowledge-based manner using fuzzy logic operators. The approach showed convincing results for automated building detection and for updating existing vector data sets. Segmenting the data into semantic units, thereby abstracting from single pixels to larger entities, and analysing these units further proved worthwhile. It was also shown that for the analysis of urban areas the inclusion of surface information is essential: given the spectral similarity of image elements, the additional knowledge of object height provided by the surface model makes it possible to separate such classes.
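The per-parcel step described above is easy to sketch: given a per-pixel classification and a parcel id for each pixel, compute the fraction of each scene component per parcel and apply a rule set. The class names and rule thresholds below are illustrative assumptions, not the thesis's actual rules.

```python
from collections import Counter

# Assumed toy data: per-pixel class labels and the parcel each pixel belongs to.
pixel_class = ["roof", "roof", "tree", "grass", "roof", "grass", "tree", "tree"]
pixel_parcel = [1, 1, 1, 1, 2, 2, 2, 2]

def parcel_fractions(classes, parcels):
    """Fraction of each scene component within each land parcel."""
    counts = {}
    for c, p in zip(classes, parcels):
        counts.setdefault(p, Counter())[c] += 1
    return {p: {c: n / sum(cnt.values()) for c, n in cnt.items()}
            for p, cnt in counts.items()}

def land_cover(fracs):
    """Illustrative rule set mapping component mixtures to a land cover type."""
    if fracs.get("roof", 0) >= 0.5:
        return "built-up"
    if fracs.get("tree", 0) + fracs.get("grass", 0) >= 0.5:
        return "vegetated"
    return "mixed"

fractions = parcel_fractions(pixel_class, pixel_parcel)
labels = {p: land_cover(f) for p, f in fractions.items()}
print(fractions, labels)
```

The point of the rules is exactly the one made in the abstract: a parcel's land cover type is inferred from the mixture of components, not from any single pixel.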
206

Updating the Base de Données Topographiques du Québec using very high spatial resolution imagery and the Sigma0 software package: the case of transportation routes

Bélanger, Jean 12 1900
In order to optimize and reduce the cost of road map updating at the 1:20,000 scale, the Ministère des Ressources Naturelles et de la Faune (MRNF) mandated the Montreal-based geomatics company SYNETIX Inc. and the University of Montreal Remote Sensing Laboratory (UMRSL) to develop an application for the automatic detection of road networks in radiometrically complex, high spatial resolution optical imagery. To this end, the contractors adapted the SIGMA0 software package, which they had jointly developed for map updating from satellite imagery of roughly 5 m resolution. The product derived from SIGMA0 is a module named SIGMA-ROUTES, whose road detection relies on a map-guided filtering process that drives a filter along the road vectors of the existing cartography and, from the filter responses, tags the segments as intact, suspect, lost or new. For new segments, the process first detects potential starting points of new roads within the filtering corridor of known roads to which they should connect; emulating the human visual filtering process well enough to distinguish such starting points in radiometrically complex imagery (aerial photographs) is a very challenging task. The general objective of this project is to evaluate the correctness of the status assignments by quantifying performance in terms of the total linear distance detected in conformity with the reference, and by spatially analysing the inconsistencies. The test sequence first targets the effect of resolution on the conformity rate, and secondly the gains expected from a succession of enhancement treatments intended to make the images more suitable for road extraction. The overall approach begins with the characterization of a test site in the Sherbrooke region containing 40 km of roads of various categories, from forest trails to a wide collector, over an area of 2.8 km2; a ground-truth map of the transportation routes, established by visual detection, provides the reference against which the SIGMA-ROUTES detections are compared. Our results confirm that the radiometric complexity of high-resolution images in urban environments benefits from pre-processing such as segmentation and histogram compensation, which homogenize road surfaces. Performance is also hypersensitive to resolution: moving across our three resolutions (84, 168 and 210 cm) alters the detection rate by nearly 15% of the total distance in concordance with the reference, and spatially fragments long intact vectors into portions alternating between the intact, suspect and lost statuses. Detection of existing roads in conformity with the reference reached 78% with our most effective combination of resolution and image pre-processing. Chronic detection problems were identified, including several segments left without any status and ignored by the process even though they were directly linked to intact neighbours, and an overestimation of false detections tagged suspect where the segments should have been identified as intact. Based on the linear measurements and the spatial analysis of the detections, we estimate that the intact status assignment could reach 90% conformity with the reference after various adjustments to the algorithm. Detection of new roads was a failure regardless of resolution or image enhancement: the search for new segments, which relies on locating potential starting points of new roads connected to existing ones, generates a runaway of false detections wandering among non-road entities, including numerous false detections of new roads generated parallel to roads previously tagged intact. We conclude by suggesting a procedure that takes advantage of certain enhanced images while integrating human intervention at a few pivotal stages of the process.
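The map-guided tagging step can be sketched as follows: a filter score is sampled along each known road vector, and two thresholds separate the intact, suspect and lost statuses. The scores and thresholds below are illustrative assumptions; the real SIGMA-ROUTES filter operates on the image radiometry along the vector, not on precomputed scores.

```python
def tag_segment(scores, intact_thr=0.7, lost_thr=0.3):
    """Tag one road segment from filter responses sampled along its vector."""
    mean = sum(scores) / len(scores)
    if mean >= intact_thr:
        return "intact"
    if mean < lost_thr:
        return "lost"
    return "suspect"

# Assumed filter responses along three known road vectors
# (0 = no road evidence under the filter, 1 = strong road evidence).
segments = {
    "rue A": [0.9, 0.8, 0.95, 0.85],
    "rang B": [0.5, 0.4, 0.6, 0.45],
    "chemin C": [0.1, 0.05, 0.2, 0.15],
}
status = {name: tag_segment(s) for name, s in segments.items()}
print(status)
```

Averaging over the whole vector is the simplification here; the fragmentation effect reported in the abstract arises precisely because the real process tags sub-portions of a vector independently.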
207

Experimental identification of the behavior of a composite fuselage: defect detection by full-field measurements

Peronnet, Élodie 04 October 2012
This work is set in the context of the Liquid Resin Infusion (LRI) process developed by DAHER SOCATA within the "FUSelage COMPosite" project. This manufacturing process can produce parts with complex shapes and entire composite fuselage panels, which considerably reduces the number of assembly steps and therefore the production time. The thesis focuses on the experimental identification of the behavior of a composite fuselage and is divided into two parts: the qualification of non-destructive testing (NDT) against a critical defect size, and the identification of the behavior of an orthotropic composite structure in the presence of such a defect. The first part evaluates NDT techniques based on full-field measurements (acoustic, thermal and densimetric) that are capable of detecting internal defects such as delamination and porosity within monolithic and sandwich composite structures, and that provide a visualization of the results as a 2D or 3D defect map. The choice of these methods was motivated by DAHER SOCATA's wish to acquire new NDT capabilities. The second part evaluates the elastic parameters of an orthotropic composite structure comprising a sound zone and a locally degraded zone, via a global and local identification procedure based on full-field measurements and finite element model updating. This procedure breaks down into four steps: identifying the properties of the sound structure, locating the degraded zone, integrating it into the finite element model, and identifying its properties.
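The four-step identification procedure can be caricatured in one dimension. The sketch below is a deliberately minimal stand-in for finite element model updating: a two-zone bar under a known uniaxial stress, where each zone's Young's modulus is identified by matching the model strain to "measured" strains (synthetic here) in a least-squares sense over a coarse grid of candidates. All values are assumptions.

```python
# Two-zone bar under uniaxial stress: model strain in each zone is sigma / E.
# We identify E per zone by minimizing the misfit to measured strains over a
# coarse grid of candidate moduli (a stand-in for FE model updating).
sigma = 100.0e6  # applied stress, Pa (assumed)

# Synthetic "measurements": sound zone E ~ 50 GPa, degraded zone E ~ 30 GPa.
measured = {
    "sound": [2.0e-3, 2.05e-3, 1.95e-3],
    "degraded": [3.3e-3, 3.4e-3, 3.35e-3],
}

def identify(strains, candidates):
    """Pick the modulus whose model strain best fits the measurements."""
    def misfit(E):
        return sum((eps - sigma / E) ** 2 for eps in strains)
    return min(candidates, key=misfit)

candidates = [20e9, 30e9, 40e9, 50e9, 60e9]
E_id = {zone: identify(eps, candidates) for zone, eps in measured.items()}
print(E_id)  # the degraded zone is identified with a lower modulus
```

In the actual procedure the "model strain" comes from a finite element computation of the orthotropic structure and the measurements from full-field techniques, but the update loop has the same shape: perturb parameters, compare fields, keep the best fit.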
208

Who hacked my toaster? A study about security management of the Internet of Things

Hakkestad, Mårten, Rynningsjö, Simon January 2019
The Internet of Things is a growing area with growing security concerns; new threats emerge almost every day. Keeping up to date, monitoring networks and devices, and responding to compromised devices and networks are hard and complex matters. This bachelor's thesis aims to discover how an IT company can work with security management within the Internet of Things. This is done by looking into how an IT company can work with updating, monitoring and responding within the Internet of Things, as well as the challenges of doing so. A qualitative research approach was used for this case study, along with an interpretative perspective and abductive reasoning. Interviews were performed with employees of a large IT company based in Sweden, along with extensive document analysis. The thesis identifies challenges in security management within the areas of updating, monitoring and responding, along with how the case company works to counter them. Largely, these challenges can be summarized as: everything is harder given the sheer number of devices there are within the Internet of Things.
209

Contrôle d'accès efficace pour des données XML : problèmes d'interrogation et de mise-à-jour / Efficient Access Control to XML Data : Querying and Updating Problems

Mahfoud, Houari 18 February 2014 (has links)
XML has become a standard for the representation and exchange of data across the web. Replication of data within different sites is used to increase the availability of data by minimizing the access time to the shared data. However, the safety of the shared data remains an important issue. The aim of this thesis is to propose models of XML access control that take into account both read and update rights and that overcome the limitations of existing models. We consider the XPath language and the XQuery Update Facility to formalize user access queries and user update operations, respectively. We give formal descriptions of our read and update access control models, and we present efficient algorithms to enforce policies that can be specified using these models. Detailed proofs are given that show the correctness of our proposals. The last part of this thesis studies the practicality of our proposals. First, we present our system, called SVMAX, which implements our solutions, and we conduct an extensive experimental study, based on a real-life DTD, to show that it scales well. Many native XML database systems (NXD systems) have been proposed recently that are aware of the XML data structure and provide efficient manipulation of XML data using most of the W3C standards. Finally, we show that our system can be integrated easily and efficiently within a large set of NXD systems, namely BaseX, Sedna and eXist-db. To the best of our knowledge, SVMAX is the first system for securing XML data in the presence of arbitrary DTDs (recursive or not), a significant fragment of XPath and a rich class of XML update operations.
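The read-access side of such a model can be illustrated with a minimal sketch. The snippet below is not SVMAX's algorithm (SVMAX enforces policies derived from the DTD, typically by query rewriting rather than materializing a view); it only illustrates the underlying idea of answering an XPath query against a pruned "authorized view" of the document. The policy set `FORBIDDEN` and the helper names are hypothetical.

```python
import xml.etree.ElementTree as ET

# Hypothetical read policy: element names the user is not allowed to see.
FORBIDDEN = {"salary", "ssn"}

def allowed_view(root):
    """Build the authorized view: a copy of the document in which every
    forbidden element, together with its whole subtree, is pruned out."""
    view = ET.fromstring(ET.tostring(root))
    for parent in list(view.iter()):
        for child in [c for c in list(parent) if c.tag in FORBIDDEN]:
            parent.remove(child)
    return view

def safe_query(root, path):
    """Answer the user's XPath query against the pruned view only,
    so forbidden nodes can never leak into the result."""
    return allowed_view(root).findall(path)
```

With a policy hiding `salary`, a query such as `.//salary` returns nothing while `.//name` is answered normally, and the original document is left untouched. Rewriting the query itself, as done in the thesis, avoids the cost of materializing the view; the sketch trades that efficiency for brevity.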
210

"Metodologias para geração e atualização de mosaicos de fotos aéreas no Projeto ARARA" / Methodologies for generation and updating of aerial photographs mosaics in the ARARA Project

Santos, Rodrigo Borges dos 17 August 2004 (has links)
The generation of photographic mosaics is an important support activity in many areas such as map production, environmental monitoring and agricultural management. Photogrammetry, and especially aero-photogrammetry, is the science that deals, among other subjects, with mosaic generation, using time-consuming procedures that make the maintenance and updating of photographic mosaics a difficult and high-cost task. The ARARA Project (Portuguese acronym for Autonomous and Radio-Assisted Reconnaissance Aircraft) presents a low-cost alternative for acquiring aerial photographs. An onboard, small-format digital camera can automatically take the photographs used for mosaic generation. This work proposes a methodology for mosaic generation and updating using oblique, digital, small-format aerial photographs taken by ARARA aircraft. The photographs can be corrected both geometrically and radiometrically by orthorectification and digital image processing procedures. The methodology presented in this work avoids the need for ground control points, focusing on automatic and semi-automatic mosaic generation. Automatic procedures make it possible to use a large number of small-format photographs in place of the photographs normally used in conventional aerophotogrammetry.
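The control-point-free registration step at the heart of such a methodology can be sketched in a deliberately simplified form. The snippet below assumes two already-rectified grayscale photographs of identical size related by a purely translational offset (the thesis deals with oblique photographs, so real registration also involves orthorectification and a projective warp); it recovers the integer offset by exhaustive search over the mean squared difference in the overlap. The function name and parameters are illustrative, not from the thesis.

```python
def best_offset(img_a, img_b, max_shift=5):
    """Find the integer (dx, dy) shift that best aligns img_b to img_a
    by minimizing the mean squared difference over the overlapping region.
    Images are 2D lists of gray values with identical dimensions."""
    h, w = len(img_a), len(img_a[0])
    best, best_err = (0, 0), float("inf")
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            err, n = 0, 0
            for y in range(h):
                for x in range(w):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        d = img_a[y][x] - img_b[yy][xx]
                        err += d * d
                        n += 1
            # Keep the offset with the lowest per-pixel error so far.
            if n and err / n < best_err:
                best_err, best = err / n, (dx, dy)
    return best
```

Once the offset is known, the second photograph is pasted into the mosaic at the corresponding position, and repeating this pairwise along a flight strip builds the mosaic without any ground control points; in practice, feature matching replaces the exhaustive search for efficiency.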
