11

High Performance Phased Array Platform for LiDAR Applications

Zadka, Moshe January 2020 (has links)
Light Detection and Ranging (LiDAR) systems are expected to become the de facto sensors of choice for autonomous vehicles and robotics systems due to their long range and high resolution, which allow them to map the environment accurately. Currently available LiDAR systems are based on mechanical apparatus and discrete components, resulting in large, bulky, and expensive systems with yet-to-be-proven reliability. The advent of Silicon Photonics technology in advanced CMOS foundries allows us to fabricate miniaturized optical components, such as phased arrays, that together enable reliable, solid-state, and cost-effective chip-scale LiDAR systems. Furthermore, a Silicon Photonics based platform has the advantage of integrating many complex optical components into a single chip. It is possible to realize an optical phased array based on waveguides with gratings as emitters. These emitters allow the beam to be steered along one axis by tuning the source's wavelength, exploiting the grating's sensitivity to wavelength, and along the other axis by standard phase tuning. Such a steering scheme requires only N phase shifters for an N-channel system, leading to high power efficiency. Another example that could leverage the Silicon Photonics platform is a fully coherent LiDAR system utilizing the recently reported Frequency-Modulated Continuous-Wave (FMCW) detection scheme. However, miniaturizing a LiDAR system to chip scale poses many challenges. The work in this dissertation presents solutions to some of the key challenges that must be overcome to demonstrate a high performance phased-array LiDAR. One key challenge is the trade-off between beam divergence and field of view. Here, we show a silicon-nitride/silicon platform that simultaneously achieves minimal beam divergence and maximum field of view while maintaining performance that is robust to fabrication variations. In addition, to maximize the emission from the entire length of the grating, we design the grating's strength by varying its duty cycle (apodization) so that it emits uniformly. We fabricate a millimeter-long grating emitter with a diffraction-limited beam divergence of 0.089°. Another challenge, intertwined with the aperture length mentioned before, is how to maximize the steering range of an optical phased array. The array's field of view perpendicular to the light propagation is governed by the spacing between emitters. In contrast to Radio Frequency devices, achieving maximum field of view by placing the emitters at half-wavelength pitch to avoid side lobes is challenging for optical phased arrays, because the size of the mode is comparable to the wavelength, which gives rise to cross-talk. An emitter pitch larger than half the wavelength induces grating lobes in the steered range, effectively limiting the field of view. The closer together the waveguides, the shorter the emitters must be to avoid cross-talk, fundamentally limiting the spot size in the far field. Cross-talk between waveguides induces wavefront aberrations in the beam, thereby increasing beam divergence and limiting the system's resolution and range. Here, we improve the mode confinement in the waveguide by increasing the index along the waveguide axis. We place thin Silicon rods, acting as a metamaterial, between the emitters to tightly confine the mode in the waveguide. Concentrating the mode in the waveguide reduces cross-talk between emitters and maximizes the optical phased array's field of view.
By embedding an array in a Mach–Zehnder interferometer, we demonstrate a sensitive method of measuring cross-talk between the waveguides. We also measure, in the near field, the width of an array of millimeter-long emitters. We show that by using the metamaterial we can realize a dense array of millimeter-long waveguides with gratings at a pitch of 1.2 µm with negligible cross-talk. This short pitch allows for an 83° steering range (field of view). Combining this with the work on Silicon Nitride based long gratings will allow for a LiDAR system with minimal beam divergence and a record-large field of view. Finally, the last chapter discusses Subwavelength Grating structures, which, due to their sub-wavelength dimensions, guide light without diffraction. These structures allow us to tailor the required effective index by varying their duty cycle. We evaluate their robustness to fabrication variations by embedding them inside a sensitive race-track resonator. Using this resonator, we measured the sensitivity of Subwavelength Grating structures to an offset in element location, element width, duty-cycle variation, and a width change of a single element. Lastly, we show that, due to their periodic structure, they are also robust to as many as three consecutive missing elements. This protection property opens the possibility of realizing a plethora of new devices not possible with wire waveguides. One such example is a T-splitter in which an incoming Transverse Magnetic polarized mode is split into two separate branches at a 90° angle. The platform demonstrated here paves the way for on-chip LiDAR systems for autonomous automotive, robotics, wireless communications, and particle trapping applications.
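A quick back-of-the-envelope check on the reported figures, assuming an operating wavelength near λ = 1.55 µm (the abstract does not state the wavelength), uses the standard aperture-diffraction and grating-lobe relations:

```latex
% Assumed wavelength: lambda = 1.55 um (not stated in the abstract).
% Diffraction-limited divergence of an aperture of length L:
\theta_{\mathrm{div}} \approx \frac{\lambda}{L}
  = \frac{1.55\ \mu\mathrm{m}}{1\ \mathrm{mm}}
  \approx 1.55\ \mathrm{mrad} \approx 0.089^{\circ}

% Grating-lobe-free steering range (field of view) for an emitter pitch d:
\mathrm{FOV} = 2\arcsin\!\left(\frac{\lambda}{2d}\right)
  = 2\arcsin\!\left(\frac{1.55\ \mu\mathrm{m}}{2 \times 1.2\ \mu\mathrm{m}}\right)
  \approx 80^{\circ}
```

The first value matches the reported 0.089° divergence of the millimeter-long emitter; the second is consistent with the reported 83° field of view, with the small difference attributable to the actual operating wavelength.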
12

OPTIMIZATION OF VEHICLE DYNAMICS FOR ENHANCED CLASS 8 TRUCK PLATOONING

Brady Black (9500207) 16 December 2020 (has links)
The heavy-duty transportation sector is projected to grow in the coming decades. Increasing the fuel economy of class 8 vehicles would simultaneously decrease CO2 emissions and reduce the annual fuel expenditures that account for nearly a quarter of cargo companies' annual budgets. Most technology aimed at this goal has focused on either improvements in engine efficiency or reduction of aerodynamic drag. This thesis addresses a somewhat different approach: the optimization of vehicle dynamics in order to realize fuel savings.

Through partnerships with Peloton Technology and Cummins, tests and simulations were conducted on corridors with grades up to 5%, indicating that fuel savings of up to 14.4% can be achieved through the combination of three strategies: two-truck platooning, long-horizon predictive cruise control (LHPCC), and simultaneous shifting. Two-truck platooning is the act of drafting a rear truck behind a front truck; it has been shown to reduce the drag not only of the following vehicle but also of the lead vehicle. LHPCC is an optimization of the lead truck's velocity over a given corridor to get "from point A to point B" in the most efficient way possible while respecting a trip-time constraint. Last is the use of simultaneous shifting, which allows the following vehicle to maintain the proper platoon gap distance behind the lead truck.
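A minimal sketch of the kind of problem LHPCC addresses, in our own illustrative notation (the symbols for fuel rate, grade α(s), and the trip-time budget T_trip are assumptions, not the thesis's formulation): choose the lead truck's speed profile v(s) over a corridor of length S to minimize fuel burned, where the fuel rate depends on speed and road grade, subject to a trip-time budget and speed limits.

```latex
% Illustrative formulation only; the thesis's exact problem statement may differ.
\min_{v(\cdot)} \int_{0}^{S} \frac{\dot{m}_{\mathrm{fuel}}\bigl(v(s),\,\alpha(s)\bigr)}{v(s)}\, ds
\quad \text{subject to} \quad
\int_{0}^{S} \frac{ds}{v(s)} \le T_{\mathrm{trip}},
\qquad
v_{\min} \le v(s) \le v_{\max}
```

Dividing the fuel mass flow rate by v(s) converts a per-time rate into fuel per unit distance, so the objective integrates total fuel over the corridor; grade enters through the fuel-rate term, which is what makes a long prediction horizon valuable on hilly routes.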
13

3D Object Detection for Advanced Driver Assistance Systems

Demilew, Selameab 29 June 2021 (has links)
Robust and timely perception of the environment is an essential requirement of all autonomous and semi-autonomous systems. This necessity has been the main factor behind the rapid growth and adoption of LiDAR sensors within the ADAS sensor suite. In this thesis, we develop a fast and accurate 3D object detector that converts raw point clouds collected by LiDARs into sparse occupancy cuboids to detect cars and other road users using deep convolutional neural networks. The proposed pipeline reduces the runtime of PointPillars by 43% and performs on par with other state-of-the-art models. We do not gain improvements in speed by compromising the network's complexity and learning capacity but rather through the use of an efficient input encoding procedure. In addition to rigorous profiling on three different platforms, we conduct a comprehensive error analysis and recognize principal sources of error among the predicted attributes. Even though point clouds adequately capture the 3D structure of the physical world, they lack the rich texture information present in color images. In light of this, we explore the possibility of fusing the two modalities with the intent of improving detection accuracy. We present a late fusion strategy that merges the classification head of our LiDAR-based object detector with semantic segmentation maps inferred from images. Extensive experiments on the KITTI 3D object detection benchmark demonstrate the validity of the proposed fusion scheme.
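The following sketch illustrates the kind of sparse occupancy encoding described above, under assumed detection ranges and voxel sizes; it is not the thesis's actual encoding procedure or parameters.

```python
import numpy as np

def occupancy_cuboids(points, x_range=(0.0, 70.4), y_range=(-40.0, 40.0),
                      z_range=(-3.0, 1.0), voxel_size=(0.2, 0.2, 0.2)):
    """Convert an (N, 3) LiDAR point cloud into a binary occupancy volume.

    Illustrative sketch only: the ranges and voxel size here are placeholder
    values, not the encoding parameters used in the thesis.
    """
    mins = np.array([x_range[0], y_range[0], z_range[0]])
    maxs = np.array([x_range[1], y_range[1], z_range[1]])
    size = np.array(voxel_size)
    grid_shape = np.ceil((maxs - mins) / size).astype(int)

    # Keep only points inside the region of interest.
    mask = np.all((points >= mins) & (points < maxs), axis=1)
    idx = ((points[mask] - mins) / size).astype(int)
    idx = np.minimum(idx, grid_shape - 1)  # guard against floating-point edge cases

    grid = np.zeros(grid_shape, dtype=np.uint8)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1  # mark occupied cuboids
    return grid

# Example: 100k synthetic points in front of the sensor.
cloud = np.random.uniform([0, -40, -3], [70.4, 40, 1], size=(100_000, 3))
occ = occupancy_cuboids(cloud)
print(occ.shape, int(occ.sum()), "occupied cuboids")
```

An encoding of this kind is a simple scatter operation over the raw points, which is one plausible source of the runtime savings the abstract attributes to the input encoding rather than to a smaller network.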
14

Improving Parking Efficiency Using Lidar in Autonomous Vehicles (AV)

Albabah, Noraldin 24 March 2021 (has links)
No description available.
15

Mapping a Semi-Structured Mixed Environment Using a Data-Driven Occupancy Model

Jabr, Bander A. January 2021 (has links)
No description available.
16

Machine-Learning-Enabled Cooperative Perception on Connected Autonomous Vehicles

Guo, Jingda 12 1900 (has links)
The main research objective of this dissertation is to understand the sensing and communication challenges of achieving cooperative perception among autonomous vehicles and then, using the insights gained, to guide the design of a suitable format for the data to be exchanged and of reliable, efficient data fusion algorithms on the vehicles. By understanding what data are exchanged among autonomous vehicles and how, from a machine learning perspective, it is possible to realize precise cooperative perception, enabling massive amounts of sensor information to be shared amongst vehicles. I first discuss trustworthy sharing of perception information on connected and autonomous vehicles. I then discuss how to achieve effective cooperative perception by exchanging feature maps among vehicles. In the last methodology part, I propose a set of mechanisms that improve on the earlier solution by reducing the amount of data transmitted in the network, achieving efficient cooperative perception. The effectiveness and efficiency of these mechanisms are analyzed and discussed.
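As a purely hypothetical sketch of fusing exchanged feature maps (the abstract does not specify the data format or fusion operator), one simple choice is an element-wise maximum over spatially aligned bird's-eye-view feature maps:

```python
import numpy as np

def fuse_feature_maps(ego_fmap, received_fmaps):
    """Element-wise max fusion of spatially aligned BEV feature maps.

    Hypothetical sketch: assumes each received feature map has already been
    warped into the ego vehicle's bird's-eye-view grid; the dissertation's
    actual fusion mechanism may differ.
    """
    fused = ego_fmap.copy()
    for fmap in received_fmaps:
        fused = np.maximum(fused, fmap)  # keep the strongest response per cell
    return fused

# Example: a (C=64, H=200, W=200) ego map fused with maps from two nearby vehicles.
ego = np.random.rand(64, 200, 200).astype(np.float32)
others = [np.random.rand(64, 200, 200).astype(np.float32) for _ in range(2)]
fused = fuse_feature_maps(ego, others)
print(fused.shape)
```

Exchanging intermediate feature maps rather than raw point clouds is attractive precisely because the per-vehicle payload is a fixed-size tensor, which is the kind of data volume the later mechanisms aim to reduce further.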
17

Autonomous Vehicle Path Planning with Remote Sensing Data

Dalton, Aaron James 22 January 2009 (has links)
Long-range path planning for an autonomous ground vehicle with minimal a priori data is still very much an open problem. Previous research has demonstrated that least-cost paths generated from aerial LIDAR and GIS data could play a role in automatically determining suitable routes over otherwise unknown terrain. However, most of this research has been theoretical, so there is very little literature on the effectiveness of these techniques for plotting the paths of an actual autonomous vehicle. This research aims to develop an algorithm that uses aerial LIDAR and imagery to plan paths for a full-size autonomous car. Methods of identifying obstacles and potential roadways from the aerial LIDAR and imagery are reviewed. A scheme for integrating the path planning algorithms into the autonomous vehicle's existing systems was developed, and eight paths were generated and driven by an autonomous vehicle. The paths were then analyzed for their drivability, and the model itself was validated against the vehicle measurements. The methods described were found to be suitable for generating paths both on and off road. / Master of Science
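A minimal sketch of the least-cost-path idea on a traversal-cost grid derived from aerial data (our illustration under assumed costs, not the planner developed in the thesis):

```python
import heapq
import numpy as np

def least_cost_path(cost, start, goal):
    """Dijkstra search over a 2D traversal-cost grid, where each cell's cost
    could be derived from aerial LIDAR slope/roughness and imagery.
    Illustrative sketch, not the planner developed in the thesis."""
    rows, cols = cost.shape
    dist = np.full(cost.shape, np.inf)
    prev = {}
    dist[start] = 0.0
    pq = [(0.0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            break
        if d > dist[node]:
            continue  # stale queue entry
        r, c = node
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and d + cost[nr, nc] < dist[nr, nc]:
                dist[nr, nc] = d + cost[nr, nc]
                prev[(nr, nc)] = node
                heapq.heappush(pq, (dist[nr, nc], (nr, nc)))
    # Walk predecessors back from the goal to recover the path.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

# Example: a uniform-cost field with a band of very expensive (e.g., steep) cells.
grid = np.ones((100, 100))
grid[30:70, 50] = 1000.0
route = least_cost_path(grid, (0, 0), (99, 99))
print(len(route), "cells in the least-cost path")
```

In this framing, on-road and off-road planning differ only in how the per-cell costs are assigned from the LIDAR and imagery layers.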
18

Autonomous Edge Cities: Revitalizing Suburban Commercial Centers with Autonomous Vehicle Technology and New (sub)Urbanist Principles

Burgei, David January 2017 (has links)
No description available.
19

Conversion of a Hybrid Electric Vehicle to Drive by Wire Status

Mathur, Kovid January 2010 (has links)
No description available.
20

Development of Real Time Self Driving Software for Wheeled Robot with UI based Navigation

Keshavamurthi, Karthik Balaji 26 August 2020 (has links)
Autonomous vehicles are complex modular systems with various interdependent safety-critical modules, the failure of which leads to failure of the overall system. The localization system, which estimates the pose of the vehicle in the global coordinate frame with respect to a map, accumulates drift when operated only on data from proprioceptive sensors. Current solutions to this problem are computationally heavy SLAM algorithms. An alternate system is proposed in this thesis that eliminates the drift by resetting the global coordinate frame to the local frame at every motion planning update. The system replaces the mission planner with a user interface (UI) on which the user provides local navigation inputs, eliminating the need to maintain a global frame. The user input is considered in the decision framework of the behavioral planner, which selects a safe and legal maneuver for the vehicle to follow. The path and trajectory planners generate a trajectory to accomplish the maneuver, and the controller follows the trajectory until the next motion planning update. A prototype of the system has been built on a wheeled robot and tested for the feasibility of continuous operation in autonomous vehicles. / Master of Science / Autonomous vehicles are complex modular systems with various interdependent safety-critical modules, the failure of which leads to failure of the overall system. One such module is the localization system, which is responsible for estimating the pose of the vehicle in the global coordinate frame with respect to a map. Based on this pose, the vehicle navigates to goal waypoints, which are points in the global coordinate frame specified in the map by the route or mission planner of the planning module. The localization system, however, suffers from a drift in position error due to poor GPS signals and high noise in the inertial sensors. This has been tackled by applying computationally heavy Simultaneous Localization and Mapping based methods, which identify landmarks in the environment at every time step and correct the vehicle position based on the relative change in the landmarks' positions. An alternate solution is proposed in this thesis, which delegates navigation to the passenger. This system replaces the mission planner of the planning module with a user interface on which the passenger provides local navigation input that the vehicle follows. The system resets the global coordinate frame to the vehicle frame at every motion planning update, thus eliminating the error accumulated between updates. The system is also designed to perform default actions in the absence of user navigation commands, reducing the number of commands the passenger must provide on the journey towards the goal. A prototype of the system was built and tested for feasibility.
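A hypothetical sketch of the frame-reset idea (the function and pose convention below are our assumptions, not the thesis code): at each motion planning update, any remaining local navigation input is re-expressed in the vehicle's current frame, so no long-lived global frame has to be maintained.

```python
import numpy as np

def to_vehicle_frame(points_xy, vehicle_pose):
    """Re-express 2D points in the vehicle's current local frame.

    vehicle_pose = (x, y, yaw) of the vehicle in the previous planning frame.
    Hypothetical sketch of the frame-reset idea, not the thesis's implementation.
    """
    x, y, yaw = vehicle_pose
    c, s = np.cos(yaw), np.sin(yaw)
    # Inverse SE(2) transform: translate to the vehicle origin, then rotate by -yaw.
    shifted = np.asarray(points_xy) - np.array([x, y])
    rot = np.array([[c, s],
                    [-s, c]])
    return shifted @ rot.T

# Example: a waypoint 5 m north of a vehicle at (2, 1) facing north (yaw = 90 deg)
# becomes 5 m straight ahead (local x) in the new frame.
waypoint = np.array([[2.0, 6.0]])
print(to_vehicle_frame(waypoint, (2.0, 1.0, np.pi / 2)))  # ~[[5., 0.]]
```

Because only the displacement accumulated since the last update matters, the drift that normally builds up in a long-lived global frame never has a chance to grow.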
