281

Learning Mobile Manipulation

Watkins, David Joseph January 2022
Providing mobile robots with the ability to manipulate objects has, despite decades of research, remained a challenging problem. The problem is approachable in constrained environments where there is ample prior knowledge of the environment layout and the manipulable objects. The challenge is in building systems that scale beyond specific situational instances and gracefully operate in novel conditions. In the past, researchers used heuristic and simple rule-based strategies to accomplish tasks such as scene segmentation or reasoning about occlusion. These heuristic strategies work in constrained environments where a roboticist can make simplifying assumptions about everything from the geometries of the objects to be interacted with to the level of clutter, camera position, lighting, and a myriad of other relevant variables. The work in this thesis will demonstrate how to build a system for robotic mobile manipulation that is robust to changes in these variables. This robustness will be enabled by recent simultaneous advances in the fields of big data, deep learning, and simulation. The ability of simulators to create realistic sensory data enables the generation of massive corpora of labeled training data for various grasping and navigation-based tasks. It is now possible to build systems that work in the real world yet are trained with deep learning entirely on synthetic data. The ability to train and test on synthetic data allows for quick iterative development of new perception, planning, and grasp-execution algorithms that work in many environments. To build a robust system, this thesis introduces a novel multiple-view shape reconstruction architecture that leverages unregistered views of the object. To navigate to objects without localizing the agent, this thesis introduces a novel panoramic target-goal architecture that uses the agent's previous views to inform a policy for navigating through an environment. Additionally, a novel next-best-view methodology is introduced to allow the agent to move around the object and refine its initial understanding of it. The results show that this deep-learned sim-to-real approach outperforms heuristic-based methods in terms of reconstruction quality and success weighted by path length (SPL). The approach is also adaptable to the environment and robot chosen, thanks to its modular design.
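For reference, the success-weighted-by-path-length metric mentioned above is the standard definition used in the embodied-navigation literature rather than something specific to this thesis: over N navigation episodes,

\[
\mathrm{SPL} \;=\; \frac{1}{N}\sum_{i=1}^{N} S_i\,\frac{\ell_i}{\max(p_i,\ \ell_i)},
\]

where \(S_i\) indicates whether episode \(i\) succeeded, \(\ell_i\) is the shortest-path distance from start to goal, and \(p_i\) is the length of the path the agent actually took, so the metric rewards both reaching the goal and doing so efficiently.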
282

Design, Synthesis and Study of Supramolecular Donor – Acceptor Systems Mimicking Natural Photosynthesis Processes

KC, Chandra Bikram 12 1900
This dissertation applies chemical ingenuity to the development of various photoactive supramolecular donor-acceptor systems for producing clean, carbon-free energy for the next generation. The work is inspired by principles learned from nature, where solar energy is converted into chemical energy through photosynthesis. Owing to the importance and complexity of the natural photosynthesis process, we have designed ideal donor-acceptor systems to investigate their light-energy-harvesting properties. This process involves two major steps: first, the absorption of light energy by antenna or donor systems promotes them to an excited electronic state; second, the excitation energy is transferred to the reaction center, which triggers an electron-transfer process within the system. Based on this principle, the research focuses on the development of artificial photosynthetic systems to investigate the dynamics of photoinduced energy- and electron-transfer events. Derivatives of porphyrins, phthalocyanines, BODIPYs, and subphthalocyanines have been widely used as primary building blocks for designing photoactive and electroactive ensembles in this area because of their excellent and unique photophysical and photochemical properties. Meanwhile, fullerene, mainly its readily available form C60, is typically used as the electron-acceptor component because of its unique redox potential, symmetrical shape, and low reorganization energy, which favor improved charge separation. The primary motivation of the study is to achieve fast charge separation and slow charge recombination by stabilizing the radical ion pairs formed upon photoexcitation, for maximum utilization of solar energy. Besides fullerene C60, this dissertation also investigates the potential application of carbon nanomaterials (carbon nanotubes and graphene) as primary building blocks for the study of artificial photosynthesis.
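As general photochemistry background for the charge-separation and charge-recombination kinetics discussed above (standard semiclassical Marcus theory, not a result of this dissertation), the electron-transfer rate depends on the driving force \(\Delta G^{\circ}\), the reorganization energy \(\lambda\), and the donor-acceptor electronic coupling \(H_{\mathrm{DA}}\):

\[
k_{\mathrm{ET}} \;=\; \frac{2\pi}{\hbar}\,\lvert H_{\mathrm{DA}}\rvert^{2}\,
\frac{1}{\sqrt{4\pi\lambda k_{\mathrm{B}}T}}\,
\exp\!\left[-\frac{(\Delta G^{\circ}+\lambda)^{2}}{4\lambda k_{\mathrm{B}}T}\right].
\]

A small reorganization energy, such as that of C60, places the highly exergonic charge-recombination step in the Marcus inverted region and slows it down, which is the usual rationale for seeking long-lived charge-separated states with fullerene acceptors.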
283

Design of Power-Efficient Optical Transceivers and Design of High-Linearity Wireless Wideband Receivers

Zhang, Yudong January 2021
The combination of silicon photonics and advanced heterogeneous integration is promising for next-generation disaggregated data centers that demand large scale, high throughput, and low power. In this dissertation, we discuss the design and theory of power-efficient optical transceivers with System-in-Package (SiP) 2.5D integration. Combining prior art with proposed circuit techniques, a receiver chip and a transmitter chip including two 10 Gb/s data channels and one 2.5 GHz clocking channel are designed and implemented in 28 nm CMOS technology. An innovative transimpedance amplifier (TIA) and a single-ended-to-differential (S2D) converter are proposed and analyzed for a low-voltage, high-sensitivity receiver; a four-to-one serializer, programmable output drivers, AC coupling units, and custom pads are implemented in a low-power transmitter; an improved quadrature locked loop (QLL) is employed to generate accurate quadrature clocks. In addition, we present an analysis of the inverter-based shunt-feedback TIA to explicitly depict the trade-off among sensitivity, data rate, and power consumption. Finally, research on CDR-based clocking schemes for optical links is also discussed. We review prior art and propose a power-efficient clocking scheme based on an injection-locked phase rotator. Next, we analyze injection-locked ring oscillators (ILROs), which have been widely used for quadrature clock generators (QCGs) in multi-lane optical or wireline transceivers due to their low power, low area, and technology scalability. The asymmetrical or partial injection locking from two phases to four phases results in amplitude and phase imbalances. We propose a modified frequency-domain analysis to provide intuitive insight into the performance design trade-offs. The analysis is validated by comparing analytical predictions with simulations for an ILRO-based QCG in 28 nm CMOS technology. This dissertation also discusses the design of high-linearity wireless wideband receivers. An out-of-band (OB) third-order intermodulation (IM3) cancellation technique is proposed and analyzed. By exploiting a baseband auxiliary path (AP) with a high-pass characteristic, the in-band (IB) desired signal and out-of-band interferers are split. OB IM3 products are reconstructed in the AP and cancelled in the baseband (BB). A 0.5-2.5 GHz frequency-translational noise-cancelling (FTNC) receiver is implemented in 65 nm CMOS to demonstrate the proposed approach. It consumes 36 mW without cancellation at a 1 GHz LO frequency and a 1.2 V supply, and it achieves 8.8 MHz baseband bandwidth, 40 dB gain, 3.3 dB NF, 5 dBm OB IIP3, and −6.5 dBm OB B1dB. After IM3 cancellation, the effective OB IIP3 increases to 32.5 dBm with an extra 34 mW for narrow-band interferers (two tones). For wideband interferers, 18.8 dB cancellation is demonstrated over 10 MHz with two −15 dBm modulated interferers. The local oscillator (LO) leakage is −92 dBm and −88 dBm at 1 GHz and 2 GHz LO, respectively. In summary, this technique achieves both high OB linearity and good LO isolation.
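As context for the sensitivity/data-rate/power trade-off noted above, the familiar first-order relations for a shunt-feedback TIA (textbook approximations, not the dissertation's specific analysis) already make the tension visible. With feedback resistor \(R_F\), low-frequency core gain \(A_0\), and total input capacitance \(C_{\mathrm{in}}\),

\[
Z_T(0) \approx \frac{A_0}{1+A_0}\,R_F,
\qquad
f_{-3\,\mathrm{dB}} \approx \frac{1+A_0}{2\pi R_F C_{\mathrm{in}}},
\qquad
\left.\overline{i_n^2}\right|_{R_F} = \frac{4kT}{R_F},
\]

so increasing \(R_F\) raises gain and lowers input-referred noise (better sensitivity) but shrinks the bandwidth (lower data rate), while recovering bandwidth by raising \(A_0\) costs power.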
284

Instrumented Footwear and Machine Learning for Gait Analysis and Training

Prado de la Mora, Jesus Antonio January 2021
Gait analysis allows clinicians and researchers to quantitatively characterize the kinematics and kinetics of human movement. Devices that quantify gait can be either portable, such as instrumented shoes, or non-portable, such as motion capture systems and instrumented walkways. There is a tradeoff between these two classes of systems in terms of portability and accuracy. However, recent computing advances allow for the collection of meaningful data outside of the clinical setting. In this work, we present the DeepSole system combined with several neural network models. This system is fully capable of characterizing an individual's gait and providing vibratory feedback to the wearer. Thanks to its flexible construction and wireless capabilities, it can be comfortably worn by a wide range of people, both able-bodied individuals and people with pathologies that affect their gait. It can be used for characterization, for training, and as an abstract sensor to measure human gait in real time. Three neural network models were designed and implemented to map the sensors embedded in the DeepSole system to gait characteristics and events. The first is a recurrent neural network that classifies the wearer's gait into the correct gait phase. This model was validated with data from healthy young adults and children with cerebral palsy. Furthermore, this model was implemented in real time to provide vibratory feedback to healthy young adults to create temporal asymmetry on the dominant side during regular walking. During the experiment, subjects had increased stance time on both sides, with the dominant side affected more. The second model is an encoder-decoder recurrent neural network that maps the sensor readings to the current gait-cycle percentage. This model is useful for providing continuous feedback that is synchronized to the gait. It was implemented in real time to provide vibratory feedback to six muscle groups used during regular walking. The effects of the vibration were analyzed; it was found that, depending on the feedback, subjects changed their spatial and temporal gait parameters. The third model uses all the sensors in the instrumented footwear to identify a motor phenomenon called freezing of gait in patients with Parkinson's disease. This phenomenon is characterized by transient periods, usually lasting several seconds, in which attempted ambulation is halted. The model has better performance than the state of the art and does not require any pre-processing. The DeepSole system, when used in conjunction with the presented models, is able to characterize gait and provide feedback in a wide range of scenarios. The system is portable, comfortable, and can accommodate a wide range of populations who can benefit from this wearable technology.
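As a concrete illustration of the first model's role (a minimal sketch in PyTorch with hypothetical channel counts, window lengths, and phase labels, not the thesis's actual architecture), a recurrent gait-phase classifier takes the insole sensor stream and emits a phase label at every time step:

```python
import torch
import torch.nn as nn

class GaitPhaseRNN(nn.Module):
    """Per-time-step gait-phase classifier over insole sensor streams."""
    def __init__(self, n_sensors=16, hidden=64, n_phases=4):
        super().__init__()
        self.rnn = nn.GRU(n_sensors, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_phases)   # phase logits per time step

    def forward(self, x):                          # x: (batch, time, n_sensors)
        h, _ = self.rnn(x)
        return self.head(h)                        # (batch, time, n_phases)

model = GaitPhaseRNN()
x = torch.randn(8, 200, 16)                        # 8 strides, 200 samples, 16 channels
logits = model(x)
labels = torch.randint(0, 4, (8, 200))             # placeholder phase labels
loss = nn.CrossEntropyLoss()(logits.reshape(-1, 4), labels.reshape(-1))
```

In a real-time setting, the same network would be run on a sliding window of the most recent sensor samples, and the predicted phase would gate the vibratory feedback.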
285

Essays on the Applications of Machine Learning in Financial Markets

Wang, Muye January 2021
We consider problems commonly encountered in asset management, such as optimal execution, portfolio construction, and trading-strategy implementation. These problems are generally difficult in practice, in large part due to the uncertainties in financial markets. In this thesis, we develop data-driven approaches via machine learning to better address these problems and improve decision making in financial markets. Machine learning refers to a class of statistical methods that capture patterns in data. Conventional methods, such as regression, have been widely used in finance for many decades. In some cases, these methods have become important building blocks for many fundamental theories in empirical financial studies. However, newer methods such as tree-based models and neural networks are still relatively unexplored in the financial literature, and their usability in finance is still poorly understood. The objective of this thesis is to understand the various tradeoffs these newer machine learning methods bring and to what extent they can improve a market participant's utility. In the first part of this thesis, we consider the decision between the use of market orders and limit orders. This is an important question in practical optimal trading problems. A key ingredient in making this decision is understanding the uncertainty of the execution of a limit order, that is, the fill probability or the probability that an order will be executed within a certain time horizon. Equivalently, one can estimate the distribution of the time-to-fill. We propose a data-driven approach based on a recurrent neural network to estimate the distribution of time-to-fill for a limit order conditional on the current market conditions. Using a historical data set, we demonstrate the superiority of this approach to several benchmark techniques. This approach also leads to significant cost reduction while implementing a trading strategy in a prototypical trading problem. In the second part of the thesis, we formulate a high-frequency optimal execution problem as an optimal stopping problem. Through reinforcement learning, we develop a data-driven approach that incorporates price predictability and limit order book dynamics. A deep neural network is used to represent continuation values. Our approach outperforms benchmark methods, including a supervised learning method based on price prediction. With a historical NASDAQ ITCH data set, we empirically demonstrate a significant cost reduction. Various tradeoffs between temporal-difference learning and the Monte Carlo method are also discussed. Another interesting insight is the existence of a certain universality across stocks: the patterns learned from trading one stock can be generalized to another. In the last part of the thesis, we consider the problem of estimating the covariance matrix of high-dimensional asset returns. One conventional method is to use linear factor models estimated via principal component analysis. In this chapter, we generalize linear factor models to a general framework of nonlinear factor models using variational autoencoders. We show that linear factor models are equivalent to a class of linear variational autoencoders. Furthermore, nonlinear variational autoencoders can be viewed as an extension of linear factor models that relaxes the linearity assumption. One application of covariance estimation is the construction of minimum-variance portfolios. Through numerical experiments, we demonstrate that the variational autoencoder improves upon linear factor models and leads to a superior minimum-variance portfolio.
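For concreteness, the linear-factor-model baseline referenced above rests on standard relations (general background, not results from the thesis): if returns follow \(r = Bf + \varepsilon\) with unit-variance, uncorrelated factors and diagonal idiosyncratic covariance \(D\), then

\[
\Sigma = BB^{\top} + D,
\qquad
w^{*} = \frac{\Sigma^{-1}\mathbf{1}}{\mathbf{1}^{\top}\Sigma^{-1}\mathbf{1}},
\]

where \(\Sigma\) is the implied return covariance and \(w^{*}\) is the unconstrained, fully invested minimum-variance portfolio. The VAE-based approach described above keeps this pipeline but replaces the linear map \(Bf\) with a learned nonlinear decoder.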
286

Towards Generalist Robots through Visual World Modeling

Chen, Boyuan January 2022
Moving from narrow robots that specialize in specific tasks to generalist robots that excel at multiple tasks under various environmental conditions is the future of next-generation robotics. The key to generalist robots is the ability to learn world models that are reusable, generalizable, and adaptable. Having a general understanding of how the physical world works will enable robots to acquire transferable knowledge across different tasks, predict possible outcomes of future actions before execution, and constantly update their knowledge through continual interactions. While the majority of robot learning frameworks mix task-related and task-agnostic components together throughout the learning process, these two components are largely independent: changing one often leaves the other unchanged. For example, a task-agnostic component such as the computational model of the robot body remains the same even under different task settings, while a task-related component such as the dynamics of a moving object remains the same for different embodiments. This thesis studies the key steps towards building generalist robots by decomposing the world modeling problem into task-agnostic and task-related elements: (1) the robot modeling itself; (2) the robot modeling other agents; and (3) the robot modeling the physical environment. This framework has produced powerful and efficient learning-based robotic systems for a variety of tasks and physical embodiments, such as computational models of physical robots that can be reused and adapted to numerous task objectives and changing environments, behavior-modeling frameworks for complex multi-robot applications, and dynamical-system-understanding algorithms that distill compact physics knowledge from high-dimensional and multi-modal sensory data. The approach in this thesis could help catalyze the understanding, prediction, and control of increasingly complex systems.
287

Soft actuator and agile soft robot

Xia, Boxi January 2022
Robots play an important part in many aspects of our society by performing repetitive, dangerous, or high-precision tasks. Most existing robots are made of rigid components, which lack passive compliance and pose a challenge for adapting to the environment and for safe human-robot interaction. Rigid robots may be equipped with sensors and programmed with proprioceptive feedback control to achieve active compliance, but this may fail in the event of unforeseen situations or sensor failure. In contrast, animals have evolved flexible or soft body parts that help them adapt to changing environments. Soft robotics is an emerging field in robotics, drawing inspiration from nature by integrating soft materials into actuator and mechanical design. With the inclusion of soft material, soft actuators and robots can deform actively or passively, making it possible to sense, absorb impact, and adapt to their environment through deformation. However, while soft actuators and robots have superior properties to rigid ones, they are often challenging to manufacture and control precisely. In addition, they may suffer from slow speed and material degradation. Thus, in this thesis, we aim to address these issues in developing high-performance soft actuators and soft robots. The thesis is divided into two parts. In the first part, we focus on improving the manufacturability and performance of a self-contained soft actuator that originated in the Creative Machines Lab. The soft actuator is composed of a cured silicone-ethanol mixture embedded with heating coils. When the coils are electrically actuated, the ethanol trapped inside undergoes liquid-vapor transitions, and thus the actuator undergoes extreme volume change. While this actuator exhibits high strain and high stress, it is very slow to actuate, has a limited cycle life, and requires molds to manufacture. The first part of the thesis addresses these issues. Specifically, in chapter 2, we discuss using multi-material 3D printing to automate the manufacturing of the silicone-ethanol composite. In chapter 3, we discuss using laser-cut flexible kirigami patterns to improve the manufacturability of its heating element. Chapter 4 characterizes its actuation profile and addresses improvements to its thermal conductivity by infusing thermally conductive fillers. Soft actuation is an actively researched area; however, many high-performance soft actuators are challenging to manufacture and thus are less accessible to the general robotics community. Conventional actuators such as electric motors are widely available but lack flexibility. Therefore, the second part of the thesis aims at combining rigid motors with soft materials to design and control high-performance hybrid soft robots. Simulation is a good way to evaluate and optimize robot design and control; however, existing simulators that support motor-driven soft robots have limited features. Chapter 5 discusses this issue and presents a physically based real-time soft robot simulator capable of simulating motor-driven soft robots. In addition, chapter 5 presents the design and control of a 3D-printed hybrid soft quadruped robot. Chapter 6 presents the design and control of a 3D-printed hybrid soft humanoid robot. The two parts of the thesis aim to improve different aspects of soft actuators and soft robots. In conclusion, we summarize the lessons learned in developing soft actuators and robots, and the new possibilities and challenges for advancing soft robotics research.
288

Machine Learning for AI-Augmented Design Space Exploration of Computer Systems

Kwon, Jihye January 2022
Advanced and emerging computer systems, ranging from supercomputers to embedded systems, feature high performance, energy efficiency, acceleration, and specialization. The design of such systems involves ever-increasing circuit complexity and architectural diversity. Commercial high-end processors, realized as very-large-scale integration circuits, have integrated an exponentially increasing number of transistors on a chip over many decades. Along with the evolution of semiconductor manufacturing technology, another driving force behind the progress of processors has been the development of computer-aided design (CAD) software tools. Logic synthesis and physical design (LSPD) tool-chains allow designers to describe the computer system at the register-transfer level of abstraction and automatically convert the description into an integrated-circuit layout. The slowdown of technology scaling, on the other hand, has motivated the emergence of dark silicon and heterogeneous architectures with application-specific hardware accelerators. The design of various accelerators is facilitated by high-level synthesis (HLS) tools that translate a behavioral description of a computer system into a structural register-transfer-level one. CAD approaches have evolved towards raising the level of design abstraction and providing more options to optimize the architecture. For each system synthesized via advanced CAD tools, designers explore the design space in search of optimal configurations of the tool options and architectural choices, also called "knobs". These knobs affect the execution of CAD algorithms and eventually impact the multi-dimensional quality-of-result (QoR) of the final implementation. During design-space exploration (DSE), designers leverage their experience and expertise to determine the relationship between knobs and QoR. To further reduce the number of time- and resource-consuming CAD runs during DSE, a large number of heuristic and model-based approaches have been proposed. More recently, the rise of machine learning (ML) and artificial intelligence (AI) has prompted the possibility of AI-augmented DSE, which exploits ML techniques to predict the knobs-QoR relationship. Yet, existing heuristic and ML-based approaches still require a sufficient number of CAD runs for each system because they do not accumulate and exploit experiential knowledge across systems as designers would. Expanding the potential of AI-augmented DSE and pushing its frontier forward raises multiple challenges due to the characteristics of CAD flows. 1) Whereas many ML applications utilize data obtained from huge collections of user input and public databases for a single problem, the QoR-prediction problem for each system suffers from limited availability of data obtained from expensive CAD runs. In particular, an industrial LSPD tool-chain specifies hundreds of separate knobs, resulting in an extreme curse of dimensionality. 2) Different systems exhibit different knobs-QoR relationships. Hence, learning from previously explored systems needs to be preceded by identifying distinct systems and relating them to one another. Often, it is difficult to obtain an efficient representation of a system. 3) Designers often apply different sets of knob configurations to different systems, which makes it harder to learn from previous DSE results. Especially in HLS, the heterogeneity of various systems leads to broad knob heterogeneity across them.
To address these challenges and boost ML performance, I propose to flexibly connect the elements of the many QoR-prediction problems with one another. My thesis is that the exploration of the design space of a computer system can be effectively augmented by artificial intelligence via learning from the experience with the design and optimization of other systems. For LSPD of industrial high-performance processors, I propose a novel collaborative recommender system approach that learns hidden features from the interactions (CAD runs) of many "users" (systems) and "items" (knob configurations). To cope with the curse of dimensionality, the item features are decomposed into features of item attributes (knobs). The combined model predicts the QoR for each user-item pair. For HLS of application-specific accelerators, I present a series of neural network models in the order of their evolution towards the proposed mixed-sharing transfer-learning model. Transfer learning aims at leveraging knowledge gained from previous problems; however, due to the system and knob heterogeneities, the model needs to distinguish which pieces of that knowledge should be transferred. The proposed ML approaches aim not only to use experiential knowledge as designers do but also to ultimately assist designers by providing alternative insights and suggesting optimization possibilities for new systems. As an effort in this direction, I develop an AI-augmented DSE tool that exploits the aforementioned models and generates recommended knob configurations for new target systems. Through this research, I investigate the potential of next-level AI-augmented DSE with the goal of promoting secure collaborative engineering in the CAD community without the need to share confidential information and intellectual property.
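To make the collaborative-recommender idea above concrete, here is a minimal sketch in PyTorch (hypothetical layer sizes and knob encoding, not the model from the thesis): each design plays the role of a "user", each knob configuration the role of an "item", and the item representation is composed from per-knob embeddings so that hundreds of knobs do not blow up the item vocabulary.

```python
import torch
import torch.nn as nn

class QoRRecommender(nn.Module):
    """Predicts a scalar QoR value for a (design, knob-configuration) pair."""
    def __init__(self, n_designs, knob_levels, dim=16):
        super().__init__()
        self.design_emb = nn.Embedding(n_designs, dim)        # "user" features
        # one small embedding table per knob; an item is the sum of its
        # knob-setting embeddings (attribute decomposition of the item)
        self.knob_embs = nn.ModuleList(
            nn.Embedding(levels, dim) for levels in knob_levels)
        self.out = nn.Linear(dim, 1)

    def forward(self, design_idx, knob_settings):             # knob_settings: (B, n_knobs)
        d = self.design_emb(design_idx)
        k = sum(emb(knob_settings[:, i]) for i, emb in enumerate(self.knob_embs))
        return self.out(d * k).squeeze(-1)                    # predicted QoR

# example: 10 designs, 3 knobs with 4/2/8 possible settings each
model = QoRRecommender(n_designs=10, knob_levels=[4, 2, 8])
qor = model(torch.tensor([0, 3]), torch.tensor([[1, 0, 5], [2, 1, 7]]))
```

A model of this shape would be fit by regressing predicted QoR against observed CAD-run results across many designs, which is what lets experience gathered on one system inform predictions for another.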
289

Microstructure Analysis and Surface Planarization of Excimer-laser Annealed Si Thin Films

Yu, Miao January 2020
The excimer-laser-annealed (ELA) polycrystalline silicon (p-Si, or polysilicon) thin film, which underpins a display market worth more than 100 billion dollars, is the backplane material of modern advanced LCD and OLED products. The microstructure (i.e., the ELA microstructure) and surface morphology of an ELA p-Si thin film are the two main factors determining the material properties, and they significantly affect the performance of the subsequently fabricated thin-film transistors (TFTs). The microstructure is the result of a rather complex crystallization process during ELA, which is far from equilibrium, multiple-pulse-per-area, and processing-parameter dependent. Studies of the device-performance-related ELA microstructure and surface morphology, as well as of the microstructure evolution during the ELA process, have long been demanded by both scientific research and industrial applications, but unfortunately have not been thoroughly performed in the past. The main device-performance-related characteristics of the ELA microstructure are generally considered to be the grain size and the presence of dense grain boundaries. In this thesis, an image-processing-based program (referred to as the GB extraction program) is developed to extract the grain-boundary map (GB map) from transmission electron microscope (TEM) images of the ELA microstructure. The grain sizes are straightforwardly calculated from the GB map and statistically analyzed. More importantly, based on the GB maps, we propose and perform a rigorous scheme that we call local-microstructure analysis (LMA) to quantitatively and systematically analyze the spatial distribution of the grain boundaries. The "local area" is mainly defined by the geometry and the location of a TFT. The successful extraction of the GB map and the subsequent LMA are enabled by our ability to produce high-resolution TEM micrographs containing a statistically significant number of grains for sensible quantitative analysis. The LMA enables, for the first time, quantitative and rigorous analysis of the spatial characteristics of the microstructure, especially device-geometry- and location-related characteristics. Additionally, we present and highlight the benefits of the LMA approach over the traditional statistical grain-size analysis of the ELA microstructure. From the grain-size analysis, we find that the grain size across a statistically significant number of grains generally follows the same distribution as in the stochastic grain-growth scenario at the beginning of the ELA process, when the laser pulse (i.e., shot) number is small. As the shot number increases, the overall grain size monotonically increases while the distribution profile becomes broader. When the shot number reaches the ELA threshold (several tens of laser shots), the distribution profile substantially deviates from the stochastic profile and shows two sharp peaks in grain size around 300 nm and 450 nm, which is consistent with the previously proposed theory of energy coupling and nonuniform energy deposition during ELA. From the LMA, local nonuniformity of the grain-boundary density (GB density) at device length scales and regions of high grain-boundary periodicity are identified. More importantly, we find that the local nonuniformity is much more pronounced when the p-Si film exhibits some level of spatial ordering, but less pronounced for a random grain arrangement.
It is worth noting that devices of different sizes and orientations have different sensitivities to the local nonuniformity of the ELA-generated p-Si thin film. In addition, based on the analysis results, the connection between the microstructure evolution and the partial melting and resolidification process of the Si film is discussed. Aside from the microstructure, the surface morphology of the ELA films, featuring pronounced surface protrusions, is characterized via atomic force microscopy (AFM). Attempts are made to planarize these surface protrusions, which are detrimental to subsequent device performance. In these attempts, the as-is (oxide-capped) ELA films and the BHF-treated ELA films are subjected to single shots of excimer irradiation. When the results are compared, an anisotropic melting phenomenon of the p-Si grains is identified, which appears to be strongly affected by the presence of the surface oxide capping layer. Conceptual models are developed and numerical simulations are employed to explain the observed anisotropic melting and the effect of the surface oxide layer. Eventually, a 41.8% reduction in root-mean-square (RMS) surface roughness is achieved for BHF-treated ELA films. The results gained from the systematic analysis of the ELA microstructure and the attempt at surface planarization further our understanding of (1) the device-performance-related material microstructure of ELA p-Si thin films, (2) the microstructure evolution occurring during multiple shots of the ELA process, and (3) the fundamental phase transformations in the far-from-equilibrium, melt-mediated excimer-laser annealing of p-Si thin films. Such understanding could help engineers design microelectronic devices and the ELA manufacturing process, and provide researchers with insights into the melting and solidification of polycrystalline materials in general, thus contributing to both the related scientific and technological communities. The GB extraction program and the LMA scheme developed and demonstrated in this thesis, as another contribution to the related research field, could also be generalized to the microstructural study of other polycrystalline materials where grain geometry and arrangement are of concern.
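For illustration, the grain-size step described above can be sketched in a few lines of Python with scikit-image (hypothetical array layout and pixel scale; the thesis's actual GB extraction program is not reproduced here): starting from a binary GB map, the enclosed grains are labeled as connected components and measured.

```python
import numpy as np
from skimage.measure import label, regionprops

def grain_sizes_nm(gb_map, nm_per_pixel):
    """gb_map: 2D boolean array, True on grain-boundary pixels."""
    grains = label(~gb_map, connectivity=1)           # grains = connected non-boundary regions
    return np.array([r.equivalent_diameter * nm_per_pixel   # circle-equivalent diameter
                     for r in regionprops(grains)])

# example on a synthetic map; a local-area GB density in a TFT-sized window
# is simply the fraction of boundary pixels in that window
gb_map = np.zeros((512, 512), dtype=bool)
gb_map[::64, :] = gb_map[:, ::64] = True              # toy square "grains"
sizes = grain_sizes_nm(gb_map, nm_per_pixel=2.0)
gb_density = gb_map[100:200, 100:300].mean()
```

Statistics such as the mean grain size, the distribution profile, and window-by-window GB density follow directly from these arrays.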
290

Intuitive Human-Machine Interfaces for Non-Anthropomorphic Robotic Hands

Meeker, Cassie January 2020
As robots become more prevalent in our everyday lives, both in our workplaces and in our homes, it becomes increasingly likely that people who are not experts in robotics will be asked to interface with robotic devices. It is therefore important to develop robotic controls that are intuitive and easy for novices to use. Robotic hands, in particular, are very useful, but their high dimensionality makes creating intuitive human-machine interfaces for them complex. In this dissertation, we study the control of non-anthropomorphic robotic hands by non-roboticists in two contexts: collaborative manipulation and assistive robotics. In the field of collaborative manipulation, the human and the robot work side by side as independent agents. Teleoperation allows the human to assist the robot when autonomous grasping is not able to deal sufficiently well with corner cases or cannot operate fast enough. Using the teleoperator's hand as an input device can provide an intuitive control method, but finding a mapping between a human hand and a non-anthropomorphic robot hand can be difficult due to the hands' dissimilar kinematics. In this dissertation, we seek to create a mapping between the human hand and a fully actuated, non-anthropomorphic robot hand that is intuitive enough to enable effective real-time teleoperation, even for novice users. We propose a low-dimensional and continuous teleoperation subspace which can be used as an intermediary for mapping between different hand pose spaces. We first propose the general concept of the subspace, its properties, and the variables needed to map from the human hand to a robot hand. We then propose three ways to populate the teleoperation subspace mapping. Two of our mappings use a dataglove to harvest information about the user's hand. We define the mapping between joint space and teleoperation subspace with an empirical definition, which requires a person to define hand motions in an intuitive, hand-specific way, and with an algorithmic definition, which is kinematically independent and uses objects to define the subspace. Our third mapping for the teleoperation subspace uses forearm electromyography (EMG) as a control input. Assistive orthotics is another area of robotics where human-machine interfaces are critical, since, in this field, the robot is attached to the hand of the human user. In this case, the goal is for the robot to assist the human with movements they would not otherwise be able to achieve. Orthotics can improve the quality of life of people who do not have full use of their hands. Human-machine interfaces for assistive hand orthotics that use EMG signals from the affected forearm as input are intuitive, and repeated use can strengthen the muscles of the user's affected arm. In this dissertation, we seek to create an EMG-based control for an orthotic device used by people who have had a stroke. We would like our control to enable functional motions when used in conjunction with an orthosis and to be robust to changes in the input signal. We propose a control for a wearable hand orthosis which uses an easy-to-don, commodity forearm EMG band. We develop a supervised algorithm to detect a user's intent to open and close their hand, and pair this algorithm with a training protocol which makes our intent detection robust to changes in the input signal. We show that this algorithm, when used in conjunction with an orthosis over several weeks, can improve distal function in users.
Additionally, we propose two semi-supervised intent-detection algorithms designed to keep our control robust to changes in the input data while reducing the length and frequency of our training protocol.
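As an illustration of what a supervised intent detector of this general kind looks like (a minimal sketch with NumPy and scikit-learn, using hypothetical channel counts, window lengths, and labels rather than the thesis's algorithm or training protocol): windowed RMS features from an 8-channel forearm EMG band are fed to a linear classifier that outputs open/relax intent.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def rms_features(emg, win=200):
    """emg: (n_samples, 8) raw EMG; returns one RMS feature vector per window."""
    n = (len(emg) // win) * win
    windows = emg[:n].reshape(-1, win, emg.shape[1])
    return np.sqrt((windows ** 2).mean(axis=1))        # (n_windows, 8)

# training on labeled sessions: y = 1 for "open hand" intent, 0 otherwise
rng = np.random.default_rng(0)
train_emg = rng.normal(size=(20_000, 8))               # placeholder recording
train_labels = rng.integers(0, 2, size=len(rms_features(train_emg)))
clf = LogisticRegression().fit(rms_features(train_emg), train_labels)

# online use: classify the most recent window and drive the orthosis
intent = clf.predict(rms_features(train_emg[-200:]))[0]
```

A recalibration protocol of the sort described above is what keeps a detector like this robust as the EMG signal drifts from session to session.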
