281

Det perifera seendets betydelse bland gångtrafikanter i naturlig miljö / Peripheral vision and its importance amongst pedestrians in a natural setting

Björnqvist, Anton January 2019 (has links)
The importance and role of peripheral vision amongst pedestrians is an area which has long remained unexplored. Previous studies regarding peripheral vision and pedestrians have mostly examined the characteristics of peripheral vision, the general visual behaviours of pedestrians, and whether people affected by a natural loss of peripheral vision fixate on different objects compared to those with normal vision. To examine the role of peripheral vision amongst pedestrians, an experiment with 20 participants was conducted. The experiment took place in a car park, where head movements (i.e. how many times each participant moved their head) and head directions (i.e. in which direction the participants moved their heads) were recorded using three action cameras. Two of the cameras were mounted on a helmet worn by the participants during the experiment; the third was held by the experimenter, who recorded the participants from behind. The experiment consisted of four conditions: two in which the participants' peripheral vision was blocked to different extents, one with no manipulation of the visual field, and one in which the participants were told to watch a video on a cell phone during the walk. The results demonstrated a significant difference in the number of head movements between all four conditions. Furthermore, the results demonstrated a significant difference in the relative frequency of downward head directions between the first three conditions. After the experiment, the participants answered a short survey with questions related to each condition. The answers showed, amongst other things, that the participants found the condition in which their peripheral vision was blocked to the largest extent to be the most difficult. A thematic analysis was conducted on the recordings of a think-aloud protocol which the participants were asked to carry out during the experiment. The thematic analysis showed, amongst other things, that the participants found the condition with no manipulation of the visual field easy, that they felt insecure when their peripheral vision was blocked and therefore had to increase their number of head movements, and that they sometimes felt the need to redirect their gaze away from the cell phone during that condition. The conclusion which can be drawn from the results is that peripheral vision is widely used by pedestrians in natural settings, based in part on the fact that the participants increased their number of head movements when their peripheral vision was limited, and in part on their own expressed thoughts regarding the different conditions. However, the results cannot explain exactly how peripheral vision is used by pedestrians. / Det perifera seendets betydelse och roll bland gångtrafikanter i naturlig miljö är ett område som till stora delar stått outforskat. Tidigare studier har främst fokuserat på uppmätning av periferins egenskaper, gångtrafikanters allmänna visuella beteenden samt studerandet av personer drabbade av naturligt synfältsbortfall. I syfte att undersöka det perifera seendets betydelse bland gångtrafikanter genomfördes ett experiment med 20 deltagare på en parkeringsplats, där huvudrörelser (d.v.s. hur många gånger varje deltagare rörde på huvudet) samt huvudriktningar (d.v.s. 
i vilken riktning deltagarna rörde huvudet) uppmättes med hjälp av två actionkameror fästa på en hjälm, samt en i handen på försöksledaren som filmade deltagarna bakifrån. Experimentet bestod av fyra olika betingelser, där två av dem blockerade det perifera synfältet olika mycket, en under normala synförhållanden samt en där deltagarna fäste blicken på en mobiltelefon under gången. Resultatet visade en signifikant skillnad i antalet huvudrörelser mellan samtliga betingelser. Utöver detta visades en signifikant skillnad i frekvensen av huvudrörelser nedåt vid en jämförelse mellan de tre förstnämnda betingelserna. Efter utfört experiment fick deltagarna dessutom svara på en enkät, vilken bland annat visade att deltagarna själva skattade att betingelsen där deras perifera seende blockerades som mest var svårast att genomföra. En tematisk analys genomfördes även baserat på data från ett tänka-högt-protokoll som deltagarna var uppmanade att föra under experimentets gång. Detta visade bland annat att deltagarna kände att det var enkelt att gå vid normala synförhållanden, att det fanns osäkerhetskänslor och behov av huvudrörelser vid betingelserna där periferin blockerades, samt att många kände ett behov av att lyfta på blicken vid mobiltelefonbetingelsen. Slutsatsen som kan dras baserat på resultaten är att det perifera seendet används mycket bland gångtrafikanter i naturlig miljö, vilket dels grundar sig i att deltagarna rörde som mest på huvudet när deras perifera seende blockerades, samt deras egna yttrade tankar. Det som inte kunnat besvaras är exakt hur det perifera seendet används bland gångtrafikanter.
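
The abstract reports significant differences in head-movement counts across the four conditions without naming the statistical test used. As a purely illustrative sketch (an assumption, not the author's analysis, and with made-up data), a non-parametric repeated-measures comparison such as a Friedman test over per-participant counts could look like this:

```python
# Illustrative sketch only: compares head-movement counts across four walking
# conditions with a Friedman test (the thesis does not specify the test used;
# the participant data below is synthetic and only for demonstration).
import numpy as np
from scipy.stats import friedmanchisquare

rng = np.random.default_rng(0)

# Rows: 20 participants; columns: four conditions (assumed: large peripheral
# block, small peripheral block, no manipulation, cell-phone viewing).
head_movements = np.column_stack([
    rng.poisson(lam, size=20) for lam in (25, 18, 10, 14)
])

stat, p_value = friedmanchisquare(*head_movements.T)
print(f"Friedman chi-square = {stat:.2f}, p = {p_value:.4f}")
```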
282

Novos efeitos de real concretizados pelas máquinas de visibilidade: reconfigurações no telejornalismo perante a ubiquidade das câmeras onipresentes e oniscientes / New effects of reality achieved by the machines of visibility: reconfigurations in TV journalism before the ubiquity of omnipresent and omniscient cameras.

Martins, Maura Oliveira 26 February 2016 (has links)
Tendo em vista um cenário em que os dispositivos de registro do real adquirem onipresença na vida cotidiana, o jornalismo se encontra em um período de readequação de suas estratégias narrativas e de seu modus operandi. A presente tese procura investigar as reconfigurações no telejornalismo em razão da ubiquidade de câmeras, que capturam registros produzidos tanto pelas mídias quanto por instâncias externas a elas, e que oferecem aos veículos jornalísticos um material inesgotável e irrecusável, visto estar cercado de uma expectativa de autenticidade. Propõe-se então uma categorização às câmeras, sistematizadas como câmeras oniscientes e onipresentes, de modo a nos aproximarmos à especificidade do fenômeno. Em comum, todas as câmeras apontam à busca de uma estética realista, baseada no reconhecimento de uma baixa interferência midiática. Desse modo, o que se observa é o emprego de estratégias narrativas e estéticas para que o telejornalismo possa se apropriar destes conteúdos gerados por estas máquinas de visibilidade, que trazem às mídias algo que ficaria anteriormente restrito aos bastidores, operando também com sintoma da desfronteirização entre o público e o privado. A partir deste percurso metodológico, intenta-se por fim compreender de que forma estes dispositivos são utilizados para a concretização de novos efeitos de realismo ao jornalismo. / Considering a scenario where the technological devices that visually register the world acquire omnipresence in everyday life, journalism is in a period of readjustment of its narrative strategies and its modus operandi. This research intends to investigate the changes in TV journalism brought about by the ubiquity of cameras, which capture images produced both by the media and by external institutions and offer journalistic enterprises an inexhaustible and irrefusable material, since it is surrounded by an expectation of authenticity. We then propose a categorization of these machines, systematized as omniscient and omnipresent cameras, for the purpose of understanding the specificity of the phenomenon. In common, all of these cameras point to the search for a realistic aesthetics, based on the recognition of a low degree of media interference. Thus, it is observed that TV stations use narrative and aesthetic strategies to adapt these contents in their narratives, which bring to the media something that would previously have been restricted to the backstage area. In a sense, they operate as a symptom of the erosion of the boundaries between public and private. With this methodological course, we finally attempt to understand how these devices are used to achieve new effects of realism in journalism.
283

Interactive Environment For The Calibration And Visualization Of Multi-sensor Mobile Mapping Systems

Radhika Ravi (6843914) 16 October 2019 (has links)
LiDAR units onboard airborne and terrestrial platforms have been established as a proven technology for the acquisition of dense point clouds for a wide range of applications, such as digital building model generation, transportation corridor monitoring, precision agriculture, and infrastructure monitoring. Furthermore, integrating such systems with one or more cameras would allow forward and backward projection between imagery and LiDAR data, thus facilitating several high-level data processing activities such as reliable feature extraction and colorization of point clouds. However, the attainment of the full 3D point positioning potential of such systems is contingent on an accurate calibration of the mobile mapping unit as a whole.

This research aims at proposing a calibration procedure for terrestrial multi-unit LiDAR systems to directly estimate the mounting parameters relating several spinning multi-beam laser scanners to the onboard GNSS/INS unit in order to derive point clouds with high positional accuracy. To ensure the accuracy of the estimated mounting parameters, an optimal configuration of target primitives and drive-runs is determined by analyzing the potential impact of bias in the mounting parameters of a LiDAR unit on the resultant point cloud for different orientations of target primitives and different drive-run scenarios. This impact is also verified experimentally by simulating a bias in each mounting parameter separately. Next, the optimal configuration is used within an experimental setup to evaluate the performance of the proposed calibration procedure. This multi-unit LiDAR system calibration strategy is then extended to multi-LiDAR multi-camera systems in order to allow a simultaneous estimation of the mounting parameters relating the different laser scanners as well as cameras to the onboard GNSS/INS unit. Such a calibration improves the registration accuracy of point clouds derived from LiDAR data and imagery, along with their accuracy with respect to the ground truth. Finally, in order to qualitatively evaluate the calibration results for a generic mobile mapping system and allow the visualization of point clouds, imagery data, and their registration quality, an interface denoted as Image-LiDAR Interactive Visualization Environment (I-LIVE) is developed. Apart from its visualization functions (such as 3D point cloud manipulation and image display/navigation), I-LIVE mainly serves as a tool for the quality control of the GNSS/INS-derived trajectory and the LiDAR-camera system calibration.

The proposed multi-sensor system calibration procedures are experimentally evaluated by calibrating several mobile mapping platforms with varying numbers of LiDAR units and cameras. In all cases, the system calibration is seen to attain accuracies better than those expected based on the specifications of the involved hardware components, i.e., the LiDAR units, cameras, and GNSS/INS units.
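
As background for how mounting parameters enter point positioning, the following is a minimal sketch (not the author's implementation; frame conventions and all numeric values are assumptions) of georeferencing a single LiDAR return using boresight/lever-arm mounting parameters and a GNSS/INS pose:

```python
# Minimal sketch of LiDAR point georeferencing with mounting parameters.
# Frames and numbers are illustrative assumptions, not thesis values:
# X_ground = R_body^map (R_lidar^body * p_lidar + r_lidar^body) + r_body^map
import numpy as np

def rot_z(yaw_deg: float) -> np.ndarray:
    """Rotation about the z-axis (degrees); a single axis is used for brevity."""
    c, s = np.cos(np.radians(yaw_deg)), np.sin(np.radians(yaw_deg))
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Mounting parameters (boresight rotation and lever arm) relating the LiDAR
# frame to the IMU body frame -- these are what the calibration estimates.
R_lidar_to_body = rot_z(90.0)
lever_arm_body = np.array([0.50, 0.00, 1.20])   # metres, assumed

# GNSS/INS-derived pose of the body frame in the mapping frame at one epoch.
R_body_to_map = rot_z(10.0)
position_body_map = np.array([1000.0, 2000.0, 50.0])

# One raw LiDAR return expressed in the scanner's own frame.
p_lidar = np.array([12.3, -4.1, 0.8])

p_ground = R_body_to_map @ (R_lidar_to_body @ p_lidar + lever_arm_body) + position_body_map
print(p_ground)
```

A bias in any of the mounting parameters propagates through this equation into the final coordinates, which is why the thesis studies the impact of such biases for different target orientations and drive-run scenarios.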
284

Ground Plane Feature Detection in Mobile Vision-Aided Inertial Navigation

Panahandeh, Ghazaleh, Mohammadiha, Nasser, Jansson, Magnus January 2012 (has links)
In this paper, a method for determining ground plane features in a sequence of images captured by a mobile camera is presented. The hardware of the mobile system consists of a monocular camera that is mounted on an inertial measurement unit (IMU). An image processing procedure is proposed, first to extract image features and match them across consecutive image frames, and second to detect the ground plane features using a two-step algorithm. In the first step, the planar homography of the ground plane is constructed using an IMU-camera motion estimation approach. The obtained homography constraints are used to detect the most likely ground features in the sequence of images. To reject the remaining outliers, as the second step, a new plane normal vector computation approach is proposed. To obtain the normal vector of the ground plane, only three pairs of corresponding features are used for a general camera transformation. The normal-based computation approach generalizes the existing methods that are developed for specific camera transformations. Experimental results on real data validate the reliability of the proposed method.
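
As a minimal illustration of the homography constraint underlying the first step (a sketch with assumed intrinsics, motion, and plane parameters, not the paper's implementation), a ground-plane homography can be predicted from the camera motion and a known plane, and candidate feature matches checked against it:

```python
# Sketch of the planar-homography test for ground features (assumed values):
# for a plane n^T X = d in the first camera frame and camera motion (R, t),
# points on the plane satisfy x2 ~ K (R - t n^T / d) K^-1 x1.
import numpy as np

K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])            # assumed camera intrinsics

R = np.eye(3)                              # IMU-derived rotation between frames
t = np.array([[0.10], [0.00], [0.02]])     # IMU-derived translation (metres)
n = np.array([[0.0], [1.0], [0.0]])        # ground normal (camera y-axis assumed downward)
d = 1.5                                    # assumed camera height above the ground (metres)

H = K @ (R - (t @ n.T) / d) @ np.linalg.inv(K)

def transfer_error(x1, x2, H):
    """One-way transfer error of pixel x1 mapped by H against its match x2."""
    x1_h = np.append(x1, 1.0)
    x2_pred = H @ x1_h
    x2_pred = x2_pred[:2] / x2_pred[2]
    return np.linalg.norm(x2_pred - np.asarray(x2))

# A matched feature pair is kept as a likely ground feature if the error is small.
err = transfer_error([400.0, 420.0], [355.0, 419.0], H)
print(err, err < 3.0)
```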
285

Increasing temporal, structural, and spectral resolution in images using exemplar-based priors

Holloway, Jason 16 September 2013 (has links)
In the past decade, camera manufacturers have offered smaller form factors, smaller pixel sizes (leading to higher resolution images), and faster processing chips to increase the performance of consumer cameras. However, these conventional approaches have neither capitalized on the spatio-temporal redundancy inherent in images nor adequately provided a solution for finding 3D point correspondences for cameras sampling different bands of the visible spectrum. In this thesis, we pose the following question: given the repetitious nature of image patches, and appropriate camera architectures, can statistical models be used to increase temporal, structural, or spectral resolution? While many techniques have been suggested to tackle individual aspects of this question, the proposed solutions require prohibitively expensive hardware modifications and/or rely on overly simplistic assumptions about the geometry of the scene. We propose a two-stage solution to facilitate image reconstruction: 1) design a linear camera system that optically encodes scene information, and 2) recover full scene information using prior models learned from the statistics of natural images. By leveraging the tendency of small regions to repeat throughout an image or video, we are able to learn prior models from patches pulled from exemplar images. The quality of this approach is demonstrated in two application domains: high-speed video acquisition using low-speed video cameras, and multi-spectral fusion using an array of cameras. We also investigate a conventional approach for finding 3D correspondences that enables a generalized assorted array of cameras to operate in multiple modalities, including multi-spectral, high dynamic range, and polarization imaging of dynamic scenes.
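
As a toy illustration of the exemplar-patch idea (a sketch of the general technique only, not the thesis's reconstruction pipeline, using synthetic data), degraded patches can be replaced by their nearest neighbours drawn from the patches of an exemplar image:

```python
# Toy sketch of an exemplar-based patch prior (illustrative only): noisy 8x8
# patches are replaced by their nearest-neighbour patches from a clean exemplar.
import numpy as np

def extract_patches(img: np.ndarray, size: int = 8, stride: int = 4) -> np.ndarray:
    """Collect overlapping square patches as flattened row vectors."""
    patches = []
    for r in range(0, img.shape[0] - size + 1, stride):
        for c in range(0, img.shape[1] - size + 1, stride):
            patches.append(img[r:r + size, c:c + size].ravel())
    return np.array(patches)

rng = np.random.default_rng(1)
exemplar = rng.random((64, 64))               # stands in for a clean exemplar image
degraded = exemplar + 0.1 * rng.standard_normal(exemplar.shape)

dictionary = extract_patches(exemplar)        # "prior" learned from exemplar patches

restored = degraded.copy()
size = 8
for r in range(0, 64 - size + 1, size):
    for c in range(0, 64 - size + 1, size):
        patch = degraded[r:r + size, c:c + size].ravel()
        idx = np.argmin(((dictionary - patch) ** 2).sum(axis=1))
        restored[r:r + size, c:c + size] = dictionary[idx].reshape(size, size)

print("MAE degraded:", np.abs(degraded - exemplar).mean())
print("MAE restored:", np.abs(restored - exemplar).mean())
```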
286

Stereo vision and LIDAR based Dynamic Occupancy Grid mapping : Application to scenes analysis for Intelligent Vehicles

Li, You 03 December 2013 (has links) (PDF)
Intelligent vehicles require high-performance perception systems. Usually, a perception system consists of multiple sensors, such as cameras, 2D/3D lidars, or radars. The work presented in this Ph.D. thesis concerns several topics in camera- and lidar-based perception for understanding dynamic scenes in urban environments, and is composed of four parts. In the first part, a stereo-vision-based visual odometry is proposed by comparing several different approaches to image feature detection and feature point association. After a comprehensive comparison, a suitable feature detector and feature point association approach are selected to achieve better stereo visual odometry performance. In the second part, independently moving objects are detected and segmented from the results of visual odometry and the U-disparity image. Then, spatial features are extracted by a kernel-PCA method, and classifiers are trained on these spatial features to recognize different types of common moving objects, e.g. pedestrians, vehicles, and cyclists. In the third part, an extrinsic calibration method between a 2D lidar and a stereoscopic system is proposed. This method solves the extrinsic calibration problem by placing a common calibration chessboard in front of the stereoscopic system and the 2D lidar, and by considering the geometric relationship between the cameras of the stereoscopic system. The calibration method also integrates sensor noise models and Mahalanobis distance optimization for more robustness. Finally, dynamic occupancy grid mapping is proposed based on 3D reconstruction of the environment, obtained from stereo vision and lidar data separately and then jointly. An improved occupancy grid map is obtained by estimating the pitch angle between the ground plane and the stereoscopic system. The moving object detection and recognition results (from the first and second parts) are incorporated into the occupancy grid map to augment its semantic meaning. All the proposed methods are tested and evaluated with simulation and real data acquired by the experimental platform "intelligent vehicle SetCar" of the IRTES-SET laboratory.
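
As a small sketch of one intermediate representation mentioned above (illustrative only, with an assumed disparity range and synthetic data; not the thesis code), a U-disparity image is a per-column histogram of the disparity map in which vertical obstacles appear as high-count cells:

```python
# Sketch of U-disparity computation (illustrative): for each image column,
# histogram the disparity values; obstacles show up as strong peaks because
# they occupy many rows at a nearly constant disparity.
import numpy as np

def u_disparity(disparity: np.ndarray, max_disp: int = 64) -> np.ndarray:
    """Return a (max_disp, width) U-disparity image from an integer disparity map."""
    height, width = disparity.shape
    u_disp = np.zeros((max_disp, width), dtype=np.int32)
    for col in range(width):
        d = disparity[:, col]
        d = d[(d >= 0) & (d < max_disp)]
        u_disp[:, col] = np.bincount(d, minlength=max_disp)
    return u_disp

# Synthetic example: a ground plane whose disparity grows towards the bottom
# of the image, plus a vertical obstacle at a fixed disparity in some columns.
rows, cols = 120, 160
disp = np.tile(np.linspace(5, 40, rows, dtype=int)[:, None], (1, cols))
disp[30:110, 60:80] = 25                     # obstacle: constant-disparity block

u = u_disparity(disp)
print(u[25, 60:80].min())                    # large counts reveal the obstacle
```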
287

As câmeras cinematográficas nos anos 1950/1960 e o cinema brasileiro

Barbuto, Adriano Soriano 02 August 2010 (has links)
Motion picture cameras have changed through the years; however, their basic design has remained essentially the same throughout this period. One of the goals of this text is to understand how that design was created, and the changes it has undergone without losing its essence. In parallel, this text aims to understand how different cameras related to Brazilian cinema production in the fifties and sixties. A change in how cameras were used is noticeable in Brazil during that period: a shift in appreciation from the more traditional cameras linked to the studio system towards the European cameras developed in the thirties and forties, which were lighter and more portable. This coincides with a specific moment in Brazilian cinema, when people started to believe in independent cinema production as an answer to the studio system, which had been the dominant thinking until then. To illustrate this context, Vera Cruz and Cinema Novo, their films and their shoots, were chosen in order to relate and contrast them with camera models and their relation to the mode of production. / As câmeras cinematográficas passaram por mudanças ao longo dos anos. Porém, manteve um design que se perpetuou durante este período. Um dos objetivos do presente trabalho é entender como este design foi criado, e as variações pelas quais ele passou, sem perder a sua essência. Em paralelo a isso, entender como estas diferentes câmeras travaram relação com a produção do cinema brasileiro dos anos 1950 e 1960. É nesta época que se observa no país uma troca de postura em relação às câmeras. Passa-se de uma valorização das câmeras mais tradicionais, ligadas ao sistema de estúdio, à valorização das câmeras européias criadas no anos 1930 e 1940, que eram mais leves e portáteis. Isso coincide com um momento específico do cinema brasileiro, aquele em que se passa a crer numa solução de cinema independente como resposta ao cinema de estúdio, que era o pensamento majoritário até então. Para ilustrar todo este contexto, escolhemos a Vera Cruz e o Cinema Novo, seus filmes e filmagens, para relacioná-los e confrontá-los em relação aos tipos de câmeras e sua relação ao modo de produção.
288

3D Semantic SLAM of Indoor Environment with Single Depth Sensor / SLAM sémantique 3D de l'environnement intérieur avec capteur de profondeur simple

Ghorpade, Vijaya Kumar 20 December 2017 (has links)
Pour agir de manière autonome et intelligente dans un environnement, un robot mobile doit disposer de cartes. Une carte contient les informations spatiales sur l’environnement. La géométrie 3D ainsi connue par le robot est utilisée non seulement pour éviter la collision avec des obstacles, mais aussi pour se localiser et pour planifier des déplacements. Les robots de prochaine génération ont besoin de davantage de capacités que de simples cartographies et d’une localisation pour coexister avec nous. La quintessence du robot humanoïde de service devra disposer de la capacité de voir comme les humains, de reconnaître, classer, interpréter la scène et exécuter les tâches de manière quasi-anthropomorphique. Par conséquent, augmenter les caractéristiques des cartes du robot à l’aide d’attributs sémiologiques à la façon des humains, afin de préciser les types de pièces, d’objets et leur aménagement spatial, est considéré comme un plus pour la robotique d’industrie et de services à venir. Une carte sémantique enrichit une carte générale avec les informations sur les entités, les fonctionnalités ou les événements qui sont situés dans l’espace. Quelques approches ont été proposées pour résoudre le problème de la cartographie sémantique en exploitant des scanners lasers ou des capteurs de temps de vol RGB-D, mais ce sujet est encore dans sa phase naissante. Dans cette thèse, une tentative de reconstruction sémantisée d’environnement d’intérieur en utilisant une caméra temps de vol qui ne délivre que des informations de profondeur est proposée. Les caméras temps de vol ont modifié le domaine de l’imagerie tridimensionnelle discrète. Elles ont dépassé les scanners traditionnels en termes de rapidité d’acquisition des données, de simplicité fonctionnement et de prix. Ces capteurs de profondeur sont destinés à occuper plus d’importance dans les futures applications robotiques. Après un bref aperçu des approches les plus récentes pour résoudre le sujet de la cartographie sémantique, en particulier en environnement intérieur. Ensuite, la calibration de la caméra a été étudiée ainsi que la nature de ses bruits. La suppression du bruit dans les données issues du capteur est menée. L’acquisition d’une collection d’images de points 3D en environnement intérieur a été réalisée. La séquence d’images ainsi acquise a alimenté un algorithme de SLAM pour reconstruire l’environnement visité. La performance du système SLAM est évaluée à partir des poses estimées en utilisant une nouvelle métrique qui est basée sur la prise en compte du contexte. L’extraction des surfaces planes est réalisée sur la carte reconstruite à partir des nuages de points en utilisant la transformation de Hough. Une interprétation sémantique de l’environnement reconstruit est réalisée. L’annotation de la scène avec informations sémantiques se déroule sur deux niveaux : l’un effectue la détection de grandes surfaces planes et procède ensuite en les classant en tant que porte, mur ou plafond; l’autre niveau de sémantisation opère au niveau des objets et traite de la reconnaissance des objets dans une scène donnée. A partir de l’élaboration d’une signature de forme invariante à la pose et en passant par une phase d’apprentissage exploitant cette signature, une interprétation de la scène contenant des objets connus et inconnus, en présence ou non d’occultations, est obtenue. Les jeux de données ont été mis à la disposition du public de la recherche universitaire. 
/ Intelligent autonomous actions in an ordinary environment by a mobile robot require maps. A map holds the spatial information about the environment and gives the 3D geometry of the robot's surroundings, not only to avoid collision with complex obstacles, but also for self-localization and task planning. In the future, however, service and personal robots will prevail, and the robot will need to interact with the environment in addition to localizing and navigating. This interaction demands that the next generation of robots understand and interpret their environment and perform tasks in a human-centric way. A simple map of the environment is far from sufficient for robots to co-exist with and assist humans in the future. Human beings effortlessly build maps and interact with the environment; for them these are trivial tasks. For robots, however, these seemingly frivolous tasks are complex conundrums. Layering semantic information on regular geometric maps is the leap that helps an ordinary mobile robot become a more intelligent autonomous system. A semantic map augments a general map with information about entities, i.e., objects, functionalities, or events, that are located in the space. The inclusion of semantics in the map enhances the robot's spatial knowledge representation and improves its performance in managing complex tasks and human interaction. Many approaches have been proposed to address the semantic SLAM problem with laser scanners and RGB-D time-of-flight sensors, but the field is still in its nascent phase. In this thesis, an endeavour to solve semantic SLAM using a time-of-flight sensor which gives only depth information is proposed. Time-of-flight cameras have dramatically changed the field of range imaging and have surpassed traditional scanners in terms of rapid data acquisition, simplicity, and price, and it is believed that these depth sensors will be ubiquitous in future robotic applications. Starting with a brief motivation for a semantic stance in ordinary maps in the first chapter, state-of-the-art methods are discussed in the second chapter. Before using the camera for data acquisition, its noise characteristics were studied meticulously and the camera was properly calibrated. The novel noise filtering algorithm developed in the process helps to obtain clean data for better scan matching and SLAM. The quality of the SLAM process is evaluated using a context-based similarity score metric, which has been specifically designed for the type of acquisition parameters and data used. Abstracting a semantic layer on the point cloud reconstructed by SLAM is done in two stages. In large-scale, higher-level semantic interpretation, the prominent surfaces in the indoor environment are extracted and recognized; they include surfaces such as walls, doors, ceilings, and clutter. In indoor single-scene, object-level semantic interpretation, a single 2.5D scene from the camera is parsed and its objects and surfaces are recognized. The object recognition is achieved using a novel shape signature based on the probability distribution of 3D keypoints that are most stable and repeatable. The classification of prominent surfaces and the single-scene semantic interpretation are done using supervised machine learning and deep learning systems. To this end, the object dataset and SLAM data are also made publicly available for academic research.
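
As a small illustration of the surface-labelling stage (a sketch using simple assumed heuristic rules, not the supervised classifier actually trained in the thesis), extracted planes can be given coarse labels from their normal direction and height:

```python
# Sketch (assumed heuristic rules, not the thesis's learned classifier):
# label an extracted plane as floor/ceiling/wall-like from its unit normal
# and centroid height, with gravity assumed along -z in the map frame.
import numpy as np

def label_plane(normal, centroid_z: float, ceiling_height: float = 2.5) -> str:
    n = np.asarray(normal, dtype=float)
    n /= np.linalg.norm(n)
    vertical = abs(n[2])                  # |cos| of the angle to the vertical axis
    if vertical > 0.9:                    # roughly horizontal surface
        return "ceiling" if centroid_z > 0.8 * ceiling_height else "floor"
    if vertical < 0.2:                    # roughly vertical surface
        return "wall"
    return "clutter"

print(label_plane([0.02, -0.01, 0.99], centroid_z=0.0))   # -> floor
print(label_plane([0.70, 0.71, 0.05], centroid_z=1.2))    # -> wall
```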
290

Vitrocéramiques infrarouges pour application à la vision nocturne / Infrared glass-ceramics for night vision applications

Petracovschi, Elena 03 October 2014 (has links)
Les verres de chalcogénures sont utilisés en tant qu'optiques pour les caméras IR grâce à leur transparence dans les deux fenêtres atmosphériques [3 – 5 µm] et [8 – 12 µm]. Afin de diminuer leur prix et d'augmenter la gamme des compositions qui pourraient être produites, une nouvelle méthode de synthèse a été élaborée au laboratoire Verres et Céramiques. Les travaux présentés dans ce manuscrit ont ainsi porté sur le développement de la technique de synthèse des verres et vitrocéramiques de chalcogénures par mécanosynthèse et frittage flash, ainsi que sur l'étude de la structure et des propriétés mécaniques des vitrocéramiques. Les différents paramètres de broyage et frittage ont été étudiés et la possibilité de produire des matériaux massifs, avec une structure et des propriétés similaires à celles des verres obtenus par voie classique de fusion-trempe, a été démontrée. Egalement, il a été constaté que la génération des particules cristallines dans la matrice vitreuse permet d'améliorer les propriétés mécaniques sans altérer la transmission optique des échantillons. Finalement, une étude théorique, basée sur la méthode DFT, a été initié pour accéder à des informations plus précises concernant la structure et les propriétés mécaniques des verres et vitrocéramiques de chalcogénures. / Chalcogenide glasses are used as optics for IR cameras thanks to their transparency in the two atmospheric windows [3 – 5 µm] and [8 – 12 µm]. In order to reduce their price and to widen the range of compositions which may be produced, a new synthesis method has been developed in the Glass and Ceramics group. This manuscript thus presents the development of a new route for synthesizing chalcogenide glasses and glass-ceramics by mechanical milling and SPS sintering, together with a study of the structure and mechanical properties of the glass-ceramics. The different milling and sintering parameters have been studied, and the possibility of producing bulk samples with a structure and properties similar to those of glasses synthesized by the melt-quenching method has been demonstrated. It has also been shown that generating crystalline particles in the glassy matrix improves the mechanical properties of the samples without degrading their optical transmission. Finally, a theoretical study based on the DFT method has been initiated in order to obtain more precise information on the structure and mechanical properties of chalcogenide glasses and glass-ceramics.
