  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Radiation Hard 3D Diamond Sensors for Vertex Detectors at HL-LHC / Strahlenharte 3D Diamantsensoren für Spurdetektoren am HL-LHC

Graber, Lars 21 January 2016 (has links)
No description available.
2

Characterization of 3D Silicon Pixel Detectors for the ATLAS ITk

Samy, Md. Arif Abdulla 30 June 2022 (has links)
After ten years of highly successful operation, the Large Hadron Collider (LHC) at CERN is being upgraded to its next phase, the High-Luminosity Large Hadron Collider (HL-LHC), planned to start operation in 2029. The upgrade will substantially boost performance, with an instantaneous luminosity of 5.0×10³⁴ cm⁻²s⁻¹ (ultimate value 7.5×10³⁴ cm⁻²s⁻¹) and an average of 200 interactions per bunch crossing, which will raise fluences to more than 10¹⁶ n_eq/cm² and cause heavy radiation damage in the ATLAS detector. To withstand these conditions, it was proposed to build the innermost layer with 3D silicon sensors, which are radiation tolerant up to 2×10¹⁶ n_eq/cm² at a Total Ionizing Dose of 9.9 MGy. Two pixel geometries have been selected for the 3D sensors: 50 × 50 μm² for the endcaps (rings), to be produced by FBK (Italy) and SINTEF (Norway), and 25 × 100 μm² for the barrel (staves), to be produced by CNM (Spain). This thesis discusses the FBK production of both geometries, since FBK achieved a breakthrough with its stepper lithography process: the yield improved, particularly for the 25 × 100 μm² geometry with two readout electrodes, which had been problematic in the mask-aligner approach. The sensors were characterized electrically at wafer level, and again after integration with the RD53a readout chip (RoC) on single-chip cards (SCCs), and were verified against the ATLAS Inner Tracker (ITk) criteria. SCCs irradiated up to 1×10¹⁶ n_eq/cm² were tested in an electron test beam, where a hit efficiency of 97% was demonstrated. Further SCCs have been sent to Los Alamos for irradiation up to a fluence of 1.5×10¹⁶ n_eq/cm². Since the 3D sensors will be mounted as triplets, their assembly and QA/QC process is also discussed. A setup for reception testing and for electrical testing, at both room temperature and cold temperature, was built and is described, together with results from some early RD53a RoC-based triplets. The pre-production sensors have already been evaluated and will soon be available bump-bonded to the ITkPixV1 RoC for further testing.
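As a back-of-envelope check (not part of the thesis), the quoted pileup follows from the instantaneous luminosity through the standard relation, using approximate machine parameters that are assumptions here (inelastic cross-section σ_inel ≈ 80 mb, n_b ≈ 2760 colliding bunches, revolution frequency f_rev ≈ 11245 Hz):

\[
\langle \mu \rangle = \frac{\sigma_{\mathrm{inel}}\, L}{n_b\, f_{\mathrm{rev}}}
\approx \frac{(8 \times 10^{-26}\,\mathrm{cm^2})(7.5 \times 10^{34}\,\mathrm{cm^{-2}\,s^{-1}})}{2760 \times 11245\,\mathrm{s^{-1}}}
\approx 190,
\]

consistent with the roughly 200 interactions per bunch crossing quoted above at the ultimate luminosity.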
3

Contributions au RGBD-SLAM / RGBD-SLAM contributions

Melbouci, Kathia 02 March 2017 (has links)
To guarantee autonomous and safe navigation for a mobile robot, the processing performed for its localization must run online and be accurate enough to let the robot carry out high-level navigation and obstacle-avoidance tasks. For several years, authors of visual SLAM (Simultaneous Localization And Mapping) systems have sought the best speed/accuracy trade-off. Most existing visual SLAM solutions rely on a sparse, feature-based representation of the environment: by tracking salient image points across many video frames, both the 3D positions of those features and the motion of the camera can be estimated online. The visual SLAM community has focused on increasing the number of features tracked per image and on efficiently managing and adjusting the 3D map of features, in order to improve the estimates of the camera trajectory and the feature locations. However, visual SLAM suffers from drift due to accumulated errors, and in monocular visual SLAM the camera position is known only up to a scale factor; this factor can be fixed initially but drifts over time. To cope with these limitations, this thesis addresses the following problem: integrating additional information into a monocular visual SLAM algorithm to better constrain the camera trajectory and the 3D reconstruction, without degrading the computational performance of the original algorithm and without causing failure when the additional information is absent. For these reasons, we chose to integrate the depth measurements provided by a 3D sensor (e.g. Microsoft Kinect) and geometric information about the scene structure. The first contribution of this thesis is a modification of the monocular visual SLAM algorithm of Mouragnon et al. (2006b) to take the depth measurements of a 3D sensor into account, in particular through a bundle adjustment that combines, in a simple way, visual and depth information. The second contribution is a new cost function for the same bundle adjustment that adds, on top of the point-depth constraints, geometric constraints expressing that points belong to planes of the scene. The proposed solutions were validated on synthetic and real sequences depicting varied environments, and were compared against recent state-of-the-art methods. The results show that the developed constraints significantly improve the accuracy of SLAM localization. Moreover, the proposed solutions are easy to deploy and computationally inexpensive.
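As an illustration of the general form (a sketch, not the thesis's exact cost function; the weights λ and γ are assumed tuning parameters), a bundle adjustment augmented with depth and planar constraints can be written over camera poses C_i, 3D points X_j, image observations x_{ij}, sensor depths d_{ij}, and scene planes with unit normal n_k and offset b_k:

\[
E = \sum_{i,j} \bigl\lVert \pi(C_i, X_j) - x_{ij} \bigr\rVert^2
  + \lambda \sum_{i,j} \bigl( z_{ij}(C_i, X_j) - d_{ij} \bigr)^2
  + \gamma \sum_{k} \sum_{j \in \mathcal{P}_k} \bigl( n_k^{\top} X_j - b_k \bigr)^2,
\]

where π is the camera projection, z_{ij} the predicted depth of X_j in camera i, and 𝒫_k the set of points assigned to plane k. Minimizing the first term alone is classical bundle adjustment; the second and third terms anchor the scale and the scene structure.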
4

Application development of 3D LiDAR sensor for display computers

Ekstrand, Oskar January 2023 (has links)
Light Detection And Ranging (LiDAR) is a technology for highly accurate distance measurement, used to create high-resolution 3D maps of the environment. This degree project investigates the integration of 3D LiDAR sensors into off-highway vehicle display computers, called CCpilots. The work comprises a survey of the low-cost 3D LiDAR sensors available on the market and the development of an application for visualizing real-time data graphically, with room for optimization algorithms. The selected sensor is the Livox Mid-360, based on hybrid-solid technology, with a field of view of 360° horizontally and 59° vertically. The application was developed using the Livox SDK2 combined with a C++ back-end, with Qt QML as the graphical user interface design tool. For optimization purposes, a voxel grid filter from the Point Cloud Library (PCL) was applied. Real-time 3D LiDAR sensor data was graphically visualized on the CCpilot X900 display computer. The voxel grid filter offered some visual advantages, although it consumed more processor power than running without a filter. With or without the filter, all points generated by the LiDAR sensor could be processed and visualized by the application without any latency.
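The thesis applies PCL's voxel grid filter in a C++ application; as an illustration of the underlying idea only, here is a minimal NumPy sketch (the leaf size and the synthetic test cloud are assumptions, not project values) that downsamples a point cloud by replacing all points in each voxel with their centroid:

```python
import numpy as np

def voxel_grid_filter(points: np.ndarray, leaf_size: float) -> np.ndarray:
    """Downsample an (N, 3) cloud: all points falling in the same
    cubic voxel of side leaf_size are replaced by their centroid."""
    voxel_idx = np.floor(points / leaf_size).astype(np.int64)  # voxel of each point
    _, inverse, counts = np.unique(voxel_idx, axis=0,
                                   return_inverse=True, return_counts=True)
    inverse = inverse.ravel()
    centroids = np.zeros((counts.size, 3))
    np.add.at(centroids, inverse, points)   # sum the points in each voxel
    return centroids / counts[:, None]      # mean point per occupied voxel

# Illustrative usage with a synthetic cloud (not sensor data).
cloud = np.random.rand(100_000, 3) * 10.0   # 100k points in a 10 m cube
filtered = voxel_grid_filter(cloud, leaf_size=0.05)
print(cloud.shape, "->", filtered.shape)
```

The filter trades point density for a bounded, roughly uniform cloud size, which is why it can cost extra processing up front while easing the rendering load downstream.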
5

Methods for 3D Structured Light Sensor Calibration and GPU Accelerated Colormap

Kurella, Venu January 2018 (has links)
In manufacturing, metrological inspection is a time-consuming process: the higher the required precision, the longer the inspection takes. This is due both to slow devices that collect the measurement data and to slow computational methods that process it. The goal of this work is to propose methods to speed up some of these processes. Conventional measurement devices such as Coordinate Measuring Machines (CMMs) have high precision but low measurement speed, while new digitizer technologies have high speed but low precision. Using these devices in synergy gives a significant improvement in measurement speed without loss of precision; a method for the synergistic integration of an advanced digitizer with a CMM is discussed. Computational aspects of the inspection process are addressed next. Once a part is measured, the measurement data is compared against its model to check tolerances. This comparison is time-consuming on conventional CPUs, so we developed and benchmarked GPU accelerations of it. Finally, naive data-fitting methods can produce misleading results on non-uniform data, while weighted total least-squares methods can compensate for the non-uniformity. We show how they can be accelerated with GPUs, using plane fitting as an example. / Thesis / Doctor of Philosophy (PhD)
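As an illustration of the weighted total least-squares idea mentioned above (a NumPy sketch on assumed synthetic data, not the thesis's GPU implementation), the best-fit plane passes through the weighted centroid, and its normal is the eigenvector of the weighted covariance matrix with the smallest eigenvalue:

```python
import numpy as np

def weighted_tls_plane(points: np.ndarray, weights: np.ndarray):
    """Fit a plane n·x = d to (N, 3) points, minimizing the
    weighted sum of squared orthogonal distances."""
    w = weights / weights.sum()
    centroid = w @ points                        # weighted centroid lies on the plane
    centered = points - centroid
    cov = (centered * w[:, None]).T @ centered   # 3x3 weighted covariance
    eigvals, eigvecs = np.linalg.eigh(cov)       # eigenvalues in ascending order
    normal = eigvecs[:, 0]                       # direction of least variance
    return normal, normal @ centroid             # plane: n·x = d

# Illustrative usage: noisy samples of the plane z = 0.2x + 0.1y + 1.
rng = np.random.default_rng(0)
xy = rng.uniform(-1.0, 1.0, (500, 2))
z = 0.2 * xy[:, 0] + 0.1 * xy[:, 1] + 1.0 + rng.normal(0.0, 0.01, 500)
pts = np.column_stack([xy, z])
n, d = weighted_tls_plane(pts, np.ones(500))
print(n / n[2], d / n[2])                        # ≈ [-0.2, -0.1, 1] and 1
```

Non-uniform sampling is handled through the weights: down-weighting densely sampled regions keeps them from dominating the covariance, which is the failure mode of the naive unweighted fit.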
6

價值單元展開分析法與策略矩陣:以3D感測產業新創科技公司競爭策略為例 / Value unit expansion analysis and strategic matrix: competitive strategy of a technology startup company in the 3D sensor industry

黃紹峯, Huang, Shao-Feng Unknown Date (has links)
Building on two strategic management methods, strategic posture analysis and Strategic Matrix Analysis, this study develops a new analytic framework, "Value Unit Expansion Analysis", and integrates it with the Strategic Matrix into a systematic strategy model. Through a case study, the thesis verifies how this model enables managers to weigh future strategic options more precisely and to optimize the direction of the company's future competitive strategy. The case examines how a technology startup analyzes and plans competitive strategy in a continuously changing and competitive industry, taking the 3D sensor industry as the case industry and one of its startups, "L Company", as the case subject, and proposes an analysis method that a technology startup can use when assessing its current strategy and weighing future competitive strategies. Value Unit Expansion Analysis pursues rational and systematic thinking, giving the company a rigorous, explicit, quantified, and standardized measurement tool with which to analyze its premises more thoroughly and optimize its strategy. The framework is highly extensible and adapts to analysis from the company's various perspectives. It is also fully compatible with, and can be embedded in, Strategic Matrix Analysis, forming a strategy model that is easy to communicate and synchronize; this helps resolve the confusion and difficulties a company may encounter in strategic planning, directs efficiency, benefit, and effectiveness where they belong, and aligns the company's future development with its goals.
7

Reusage classification of damaged Paper Cores using Supervised Machine Learning

Elofsson, Max, Larsson, Victor January 2023 (has links)
This paper describes a project exploring the possibility of assessing paper core reusability by measuring chuck damage with a 3D sensor and classifying reusability with machine learning. The paper cores are part of a rolling/unrolling system at a paper mill, where a chuck is used to slow and eventually stop the revolving core; this creates damage that at some point becomes too severe for reuse. The 3D sensor used is a TriSpector1008 from SICK, based on active triangulation through laser line projection and optical sensing. A number of paper cores with damage of varying severity, labeled by the supplier as approved or not approved for further use, were provided. Supervised learning in the form of K-NN, Support Vector Machines, Decision Trees, and Random Forests was used to binary-classify the dataset based on readings from the sensor. Features were extracted from the readings in an experimental fashion, based on the spatial and frequency domains of each reading. Reusability classification was previously done by thresholding internal features in the sensor software; the goal of the project is to unify the decision-making protocol/system, with economic, environmental, and sustainable waste-management benefits. K-NN was found to be the best-suited classifier in our case. Features based on the standard deviation of the calculated depth performed best, leading to a zero false-positive rate and a recall of 99.14%, outperforming the thresholding system.
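As a minimal sketch of the described pipeline (illustrative only: the synthetic depth profiles, spread values, and train/test split are assumptions, not the thesis's data), a depth standard-deviation feature feeds a K-NN binary classifier:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import recall_score, confusion_matrix

rng = np.random.default_rng(42)

# Synthetic stand-in for sensor readings: one depth profile per core.
# Damaged cores are assumed to show a larger spread in measured depth.
def make_cores(n, damaged):
    spread = 0.8 if damaged else 0.2             # assumed values
    return [rng.normal(0.0, spread, 400) for _ in range(n)]

profiles = make_cores(80, damaged=False) + make_cores(80, damaged=True)
labels = np.array([1] * 80 + [0] * 80)           # 1 = approved for reuse

# Feature extraction: standard deviation of the calculated depth.
X = np.array([[np.std(p)] for p in profiles])

X_tr, X_te, y_tr, y_te = train_test_split(
    X, labels, test_size=0.25, random_state=0, stratify=labels)
clf = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
y_pred = clf.predict(X_te)

print("recall:", recall_score(y_te, y_pred))
print("confusion matrix:\n", confusion_matrix(y_te, y_pred))
```

A zero false-positive rate matters more than raw accuracy here: approving a damaged core for reuse is the costly error, so the confusion matrix, not just recall, is the figure to watch.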
