641

Commuting graphs for elements of order three in finite groups

Nawawi, Athirah Binti January 2013 (has links)
Let G be a finite group and X a subset of G. The commuting graph C(G,X) is the graph whose vertex set is X, with two distinct elements of X joined by an edge whenever they commute in the group G. This thesis studies the structure of commuting graphs C(G,X) when G is either a symmetric group Sym(n) or the sporadic group McL, and X is a conjugacy class of elements of order three. We describe how this graph can be useful in understanding various aspects of the structure of the group, with particular emphasis on the connectivity of the graph, the properties of the discs around a fixed vertex, and the diameter of the graph.
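To make the definition concrete, the sketch below builds C(G, X) for a small symmetric group. It only illustrates the definition of a commuting graph, not the thesis's results; the choice of Sym(5), the tuple encoding of permutations, and the use of networkx are assumptions made for the example.

```python
from itertools import permutations, combinations
import networkx as nx

def compose(p, q):
    # (p o q)(i) = p(q(i)); permutations of {0,...,n-1} stored as tuples of images
    return tuple(p[q[i]] for i in range(len(p)))

def order(p):
    identity = tuple(range(len(p)))
    k, q = 1, p
    while q != identity:
        q, k = compose(q, p), k + 1
    return k

n = 5
# In Sym(5) the elements of order three are exactly the 3-cycles,
# which form a single conjugacy class X.
X = [p for p in permutations(range(n)) if order(p) == 3]

C = nx.Graph()
C.add_nodes_from(X)
for a, b in combinations(X, 2):
    if compose(a, b) == compose(b, a):  # join distinct elements that commute
        C.add_edge(a, b)

print(len(X), "vertices; connected:", nx.is_connected(C))
if nx.is_connected(C):
    print("diameter:", nx.diameter(C))
```

Swapping Sym(5) for larger n, or restricting X to a different conjugacy class, gives a direct way to experiment with the connectivity and disc structure the abstract refers to.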
642

Estimating the necessary sample size for a binomial proportion confidence interval with low success probabilities

Ahlers, Zachary January 1900 (has links)
Master of Science / Department of Statistics / Christopher Vahl / Among the most widely used statistical concepts and techniques, seen even in the most cursory of introductory courses, are the confidence interval, the binomial distribution, and sample size estimation. This paper investigates the generation of a confidence interval from a binomial experiment in the case where zero successes are expected. Several current methods for generating a binomial proportion confidence interval are examined by means of large-scale simulations and compared in order to determine an ad-hoc method for generating a confidence interval with coverage as close as possible to nominal while minimizing width. This is then used to construct a formula which allows estimation of the sample size necessary to obtain a sufficiently narrow confidence interval (with some predetermined probability of success) using the ad-hoc method, given a prior estimate of the probability of success for a single trial. With this formula, binomial experiments could potentially be planned more efficiently, allowing researchers to plan only for the amount of precision they deem necessary, rather than trying to work with methods of producing confidence intervals that result in inefficient or, at worst, meaningless bounds.
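The abstract does not spell out which interval its ad-hoc method builds on, so the sketch below only illustrates the kind of simulation described, using the exact (Clopper-Pearson) interval as a stand-in; the guessed success probability, target width, and doubling search are illustrative choices, not the paper's procedure.

```python
import numpy as np
from scipy.stats import beta

def clopper_pearson(x, n, conf=0.95):
    """Exact binomial interval; well defined even when x = 0 successes are observed."""
    a = 1.0 - conf
    lo = 0.0 if x == 0 else beta.ppf(a / 2, x, n - x + 1)
    hi = 1.0 if x == n else beta.ppf(1 - a / 2, x + 1, n - x)
    return lo, hi

def coverage_and_width(p, n, reps=20_000, seed=0):
    """Monte Carlo estimate of coverage probability and expected interval width."""
    rng = np.random.default_rng(seed)
    hits, widths = 0, 0.0
    for x in rng.binomial(n, p, size=reps):
        lo, hi = clopper_pearson(x, n)
        hits += lo <= p <= hi
        widths += hi - lo
    return hits / reps, widths / reps

# Crude sample-size search: double n until the expected width drops below a target,
# given a prior guess of the (small) probability of success for a single trial.
p_guess, target_width, n = 0.001, 0.005, 100
while coverage_and_width(p_guess, n)[1] > target_width:
    n *= 2
print("approximate n needed:", n, "  estimated coverage:", coverage_and_width(p_guess, n)[0])
```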
643

A new stereo matching paradigm for the recovery of the third dimension in two-dimensional images

Candocia, Frank Martin 16 April 1993 (has links)
A new stereo matching paradigm is introduced as an integrated process of highly discriminating steps, adopting congruously all the fundamental steps of the stereo vision problem. The central objective is the extraction of a disparity map from which the depth map will be derived. A unique representation of the two-dimensional (2-D) stereo images in terms of linear, orthogonal, and spatially-varying attributes serves as the mathematical foundation from which the proposed stereo matching method has evolved. The devised attributes contribute equally to the decision-making process and provide information on the characterization of a potential match and its validation through a consistency check. A fundamental contribution of this thesis is in creating the possibility for the design of a dimensionally-augmented vision system (2½-D representation) based on an effective stereo paradigm with realistic computational requirements. In this design configuration, the geometrical mappings between the 3-D real-world measurements and the measurements obtained using the proposed 2½-D representation are established. Computer results for the intended objective of creating highly accurate disparity maps for various scenes of varying complexity clearly demonstrate the soundness of the proposed method, both in terms of its matching effectiveness and its realistic computational power requirements. Future objectives point to the development of enhanced algorithms for scene interpretation and understanding based on this augmented representation.
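For orientation only, the sketch below computes a disparity map with plain sum-of-absolute-differences block matching followed by a left-right consistency check. It stands in for the general pipeline (match, then validate), not for the attribute-based method proposed in the thesis; the window size, disparity range, and SAD cost are illustrative assumptions.

```python
import numpy as np

def block_match(left, right, max_disp=32, win=5):
    """Generic SAD block matching on rectified grayscale float images:
    for each left pixel, search along the same row of the right image."""
    h, w = left.shape
    r = win // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(r, h - r):
        for x in range(r, w - r):
            patch = left[y - r:y + r + 1, x - r:x + r + 1]
            best, best_d = np.inf, 0
            for d in range(0, min(max_disp, x - r) + 1):
                cand = right[y - r:y + r + 1, x - d - r:x - d + r + 1]
                cost = np.abs(patch - cand).sum()
                if cost < best:
                    best, best_d = cost, d
            disp[y, x] = best_d
    return disp

def left_right_consistency(disp_l, disp_r, tol=1):
    """Invalidate pixels whose left->right and right->left disparities disagree."""
    h, w = disp_l.shape
    ok = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            xr = x - disp_l[y, x]
            if 0 <= xr < w and abs(disp_l[y, x] - disp_r[y, xr]) <= tol:
                ok[y, x] = True
    return np.where(ok, disp_l, -1)   # -1 marks matches that fail the consistency check

# Usage sketch (hypothetical rectified pair):
# d_left  = block_match(left_img, right_img)
# d_right = block_match(right_img[:, ::-1], left_img[:, ::-1])[:, ::-1]
# valid   = left_right_consistency(d_left, d_right)
```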
644

Atomic-scale and three-dimensional transmission electron microscopy of nanoparticle morphology

Leary, Rowan Kendall January 2015 (has links)
The burgeoning field of nanotechnology motivates comprehensive elucidation of nanoscale materials. This thesis addresses transmission electron microscope characterisation of nanoparticle morphology, concerning specifically the crystallographic status of novel intermetallic GaPd2 nanocatalysts and advancement of electron tomographic methods for high-fidelity three-dimensional analysis. Going beyond preceding analyses, high-resolution annular dark-field imaging is used to verify successful nano-sizing of the intermetallic compound GaPd2. It also reveals catalytically significant and crystallographically intriguing deviations from the bulk crystal structure. So-called ‘non-crystallographic’ five-fold twinned nanoparticles are observed, adding a new perspective to the long-standing debate over how such morphologies may be achieved. The morphological complexity of the GaPd2 nanocatalysts, and many cognate nanoparticle systems, demands fully three-dimensional analysis. It is illustrated how image processing techniques applied to electron tomography reconstructions can enable more facile and objective quantitative analysis (‘nano-metrology’). However, the fidelity of the analysis is ultimately limited by artefacts in the tomographic reconstruction. Compressed sensing, a new sampling theory, asserts that many signals can be recovered from far fewer measurements than traditional theories dictate are necessary. Compressed sensing is applied here to electron tomographic reconstruction, and is shown to yield far higher fidelity reconstructions than conventional algorithms. Reconstruction from extremely limited data, more robust quantitative analysis and novel three-dimensional imaging are demonstrated, including the first three-dimensional imaging of localised surface plasmon resonances. Many aspects of transmission electron microscopy characterisation may be enhanced using a compressed sensing approach.
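The compressed-sensing claim can be seen in miniature with a toy sparse-recovery problem: far fewer random measurements than unknowns, recovered by iterative soft-thresholding (ISTA). This is a generic illustration of the principle, not the tomographic reconstruction algorithm developed in the thesis; the signal size, sparsity level, and regularisation weight are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

# A sparse "signal" (stand-in for a sparse representation of a tomogram slice)
n, k, m = 400, 10, 80                       # unknowns, nonzeros, measurements
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)

A = rng.normal(size=(m, n)) / np.sqrt(m)    # random sensing matrix
b = A @ x_true                              # far fewer measurements than unknowns

# ISTA: iterative soft-thresholding for  min 0.5*||Ax - b||^2 + lam*||x||_1
lam = 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1 / Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(2000):
    g = x - step * A.T @ (A @ x - b)                       # gradient step
    x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0)  # soft-threshold (sparsity prior)

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```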
645

Patient-Specific Finite Element Modeling of the Mitral Valve

Andison, Christopher January 2015 (has links)
As the most commonly diseased heart valve, the mitral valve (MV) has been the subject of extensive research for many years. Unfortunately, the only treatment options currently available are surgical repair and replacement. Although repair is almost always preferable to replacement, it is often underperformed due to the complexity of MV repair surgeries. Consequently, there is significant interest in generating patient-specific finite element models of the MV for the purpose of simulating mitral repairs. For practical purposes, transesophageal echocardiographic (TEE) images are most commonly used to reconstruct the mitral apparatus. However, limitations in ultrasound technology have prevented the detection of leaflet thicknesses. In the current study, a method was developed to accurately model variations in leaflet thickness using TEE datasets. Nine healthy datasets were modeled and the leaflet thicknesses were found to closely match previously reported results. As anticipated, normal valve function was also observed over the entire cardiac cycle.
646

State sum invariants of three manifolds

Newman-Gomez, Sharon Angela 01 January 1998 (has links)
No description available.
647

Non Destructive Testing for the Influence of Infill Pattern Geometry on Mechanical Stiffness of 3D Printing Materials

Unknown Date (has links)
This experiment investigated the effect of infill pattern shape on structural stiffness for 3D printed components made of carbon-fiber-reinforced nylon. In order to determine the natural frequency of each specimen, nondestructive vibrational testing was conducted and processed using data acquisition software. After obtaining the acceleration response of each component to ambient vibrational conditions and excitation, frequency response functions were generated. These functions provided the natural frequency of each component, making it possible to calculate their respective stiffness values. The four infill patterns investigated in this experiment were: Zig Zag, Tri-Hex, Triangle, and Concentric. Results of the experiment showed that changing the infill pattern of a 3D printed component, while maintaining a constant geometry and density, could increase mechanical stiffness by a factor of two. Comprehensively, the experiment showed that infill pattern geometry directly contributes to the mechanical stiffness of 3D printed components. / Includes bibliography. / Thesis (M.S.)--Florida Atlantic University, 2020. / FAU Electronic Theses and Dissertations Collection
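The step from a measured natural frequency to a stiffness value is, in the simplest single-degree-of-freedom idealisation, f_n = (1/(2*pi))*sqrt(k/m), i.e. k = m*(2*pi*f_n)^2. The sketch below applies that relation; the mass and frequencies are hypothetical numbers, chosen only so that the stiffness ratio between the two patterns comes out near the factor of two reported above, and this is a simplification of the FRF-based procedure the abstract describes.

```python
import math

def stiffness_from_natural_frequency(f_n_hz, mass_kg):
    # Single-degree-of-freedom idealisation: f_n = (1/2pi)*sqrt(k/m)  =>  k = m*(2*pi*f_n)^2
    return mass_kg * (2.0 * math.pi * f_n_hz) ** 2

mass = 0.045  # kg, hypothetical specimen mass (held constant across infill patterns)
for pattern, f_n in [("Zig Zag", 310.0), ("Triangle", 438.0)]:  # Hz, hypothetical readings
    k = stiffness_from_natural_frequency(f_n, mass)
    print(f"{pattern:10s}  f_n = {f_n:5.0f} Hz  ->  k = {k:,.0f} N/m")
```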
648

The centralisation of government departments in Northern Province, 1994-1998.

Mukheli, Azwidowi January 1998 (has links)
Masters in Public Administration - MPA / This study is an investigation of how the policy of centralising government departments of the former homelands affected various stakeholders in the province. There is general concern among the people of these former homelands that service delivery in these areas has been poor since the creation of the new provincial government. To cover the social, economic, and political impacts of centralisation, data were gathered through face-to-face interviews, mailed questionnaires, and telephone interviews. The study concluded that there is a great need to devolve power to the former homelands, now called regions within the province, in order to bring services back to where people are. In a calculated move to make use of the offices in the former homelands, the government could also relocate the Pietersburg components of government departments that are not critical to the functions of headquarters to Venda, Gazankulu, and Lebowa.
649

Multidimensional Data Processing for Optical Coherence Tomography Imaging

McLean, James Patrick January 2021 (has links)
Optical Coherence Tomography (OCT) is a medical imaging technique which distinguishes itself by acquiring microscopic-resolution images in vivo at millimeter-scale fields of view. The resulting images are not only high-resolution, but often multi-dimensional, capturing 3-D biological structures or temporal processes. The nature of multi-dimensional data presents a unique set of challenges to the OCT user: acquiring, storing, and handling very large datasets; visualizing and understanding the data; and processing and analyzing the data. In this dissertation, three of these challenges are explored in depth: sub-resolution temporal analysis, 3-D modeling of fiber structures, and compressed sensing of large, multi-dimensional datasets. Exploration of these problems is followed by proposed solutions and demonstrations which rely on tools from multiple research areas, including digital image filtering, image de-noising, and sparse representation theory. Combining approaches from these fields, advanced solutions were developed to produce new and groundbreaking results. High-resolution video data showing cilia motion in unprecedented detail and scale was produced. An image processing method was used to create the first 3-D fiber model of uterine tissue from OCT images. Finally, a compressed sensing approach was developed which is shown to guarantee high-accuracy image recovery of more complicated, clinically relevant samples than had previously been demonstrated. The culmination of these methods represents a step forward in OCT image analysis, showing that these cutting-edge tools can also be applied to OCT data and could in the future be employed in a clinical setting.
650

Interactive, Computation Assisted Design Tools

Garg, Akash January 2020 (has links)
Realistic modeling, rendering, and animation of physical and virtual shapes have matured significantly over the last few decades. Yet the creation and subsequent modeling of three-dimensional shapes remains a tedious task which requires not only artistic and creative talent, but also significant technical skill. The perfection witnessed in computer-generated feature films requires extensive manual processing and touch-ups. Every researcher working in graphics and related fields has likely experienced the difficulty of creating even a moderate-quality 3D model, whether based on a mental concept, a hand sketch, or inspiration from one or more photographs or existing 3D designs. This situation, frequently referred to as the content creation bottleneck, is arguably the major obstacle to making computer graphics as ubiquitous as it could be. Classical modeling techniques have primarily dealt with local or low-level geometric entities (e.g., points or triangles) and criteria (e.g., smoothness or detail preservation), lacking the freedom necessary to produce novel and creative content. A major unresolved challenge towards a new unhindered design paradigm is how to support the design process so that users who lack specialized skills and training can create visually pleasing and yet functional objects. Most existing geometric modeling tools are intended either for use by experts (e.g., computer-aided design [CAD] systems) or for modeling objects whose visual aspects are the only consideration (e.g., computer graphics modeling systems). Furthermore, rapid prototyping, brought on by technological advances in 3D printing, has drastically altered production and consumption practices. These technologies empower individuals to design and produce original objects, customized according to their own needs. Thus, a new generation of design tools is needed that supports the creation of designs within the domain's constraints, capturing the novice user's design intent while also meeting fabrication constraints so that the designs can be realized with minimal tweaking by experts.

To fill this void, the premise of this thesis relies on the following two tenets: 1. users benefit from an interactive design environment that allows novice users to continuously explore a design space and immediately see the tradeoffs of their design choices; 2. the machine's processing power is used to assist and guide the user in maintaining constraints imposed by the problem domain (e.g., fabrication/material constraints), as well as to help the user explore feasible solutions close to their design intent. Finding the appropriate balance between interactive design tools and the computation needed for productive workflows is the problem addressed by this thesis.

This thesis makes the following contributions:

1. We take a close look at thin shells--materials that have a thickness significantly smaller than their other dimensions. Towards the goal of achieving interactive and controllable simulations, we exploit a particular geometric insight to develop an efficient bending model for the simulation of thin shells. Under isometric deformations (deformations that involve little to no stretching), the nonlinear bending energy reduces to a cubic polynomial with a linear Hessian. This linear Hessian can be further approximated with a constant one, providing significant speedups during simulation. We also build upon this simple bending model and show how orthotropic materials can be modeled and simulated efficiently.

2. We study the theory of Chebyshev nets--a geometric model of woven materials using a two-dimensional net composed of inextensible yarns. The theory of Chebyshev nets sheds some light on their limitations in globally covering a target surface. As it turns out, Chebyshev nets are a good geometric model for wire meshes, free-form surfaces composed of woven wires arranged in a regular grid. In the context of designing sculptures with wire mesh, we rely on the mathematical theory laid out by Hazzidakis (1879) to determine an artistically driven workflow for approximately covering a target surface with a wire mesh while globally maintaining material and fabrication constraints. This relieves the user of worrying about feasibility and allows them to focus on design.

3. Finally, we present a practical design tool for the design and exploration of reconfigurables, defined as an object or collection of objects whose transformation between various states defines its functionality or aesthetic appeal (e.g., a mechanical assembly composed of interlocking pieces, a transforming folding bicycle, or a space-saving arrangement of apartment furniture). A novel space-time collision detection and response technique is presented that can be used to create an interactive workflow for managing and designing objects with various states. This work also considers a graph-based timeline during the design process instead of the traditional linear timeline, and shows its benefits as well as its challenges for the design of reconfigurables.
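On the second contribution, the covering limitation mentioned above can be stated compactly. The block below records the standard Chebyshev-net conditions and the Hazzidakis formula in the form usually quoted; it is background notation for orientation, not an excerpt from the thesis.

```latex
% Chebyshev net: both families of parameter curves of f(u,v) are inextensible (unit speed),
% with net angle \omega between them:
\lVert f_u \rVert = \lVert f_v \rVert = 1, \qquad \cos\omega(u,v) = \langle f_u, f_v \rangle .

% Hazzidakis (1879): for a quadrilateral Q bounded by net curves,
% with interior angles \alpha_1,\dots,\alpha_4,
\iint_Q K \,\mathrm{d}A \;=\; 2\pi - \bigl(\alpha_1 + \alpha_2 + \alpha_3 + \alpha_4\bigr).

% Since each \alpha_i \in (0,\pi), the total Gaussian curvature of any net quadrilateral
% lies strictly between -2\pi and 2\pi -- the source of the global covering limitation.
```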
