531

The effect of illumination on the speed and efficiency of workmen

Chesebro, J. Kelsey 01 January 1917 (has links)
No description available.
532

Regenerative braking on the Cedar Rapids-Iowa City interurban

Gilchrist, Myrl C. 01 January 1917 (has links)
No description available.
533

ENHANCED DISPERSIVE MIXING AND MORPHOLOGY OPTIMIZATION OF POLYAMIDE-BASED BLENDS IN EXTRUSION VIA EXTENSION-DOMINATED MIXING

Chen, Hao January 2020 (has links)
No description available.
534

Multi-level requirement model and its implementation for medical device

Wang, Hua 08 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Requirements determine the expectations for a new or modified product. Requirements engineering involves the definition, documentation, and maintenance of requirements. Rapidly improving technologies and changing market needs demand a shorter time to market and more diversified products. Developing new requirements for each new product from scratch is a huge undertaking, so the reusability of requirements data becomes more and more important. However, with the current "copy and paste" approach, engineers have to go through the entire set of requirements (sometimes even more than one set) to identify the ones that need to be reused or updated. This takes a lot of time and relies heavily on the engineers' experience. Software tools can make it easier to capture and locate requirements, but they cannot solve the problem of effectively reusing existing requirement data.

The overall goal of this research is to develop a new model that improves the management of requirements and makes the reuse and reconfiguration of existing requirements and requirement models more efficient. Treating requirements data as an important part of a company's knowledge body, we followed a knowledge categorization method to classify requirements into groups, called levels in this study, based on their frequency of change. There are four levels: the regulatory level, the product line level, the product level, and the project level. The regulatory level is the most stable; its requirements are derived from government and industry regulations. The product line level contains the requirements common to a group of products, the product line. The third level, the product level, holds the requirements specific to a product. The fourth and most dynamic level, the project level, covers the specific configurations of a product for a project. We chose the auto-injector as the application for implementing the model, since it is a relatively simple product whose requirements nevertheless cover many different categories.

Our research approach has three major steps. The first is to develop requirements and classify them for the model; requirements development adopts a goal-oriented model for analysis and SysML, a systems modeling language, to build the requirements model. The second step is to build requirements templates that connect the solution to an information system, whether a standalone requirements management tool or an information platform; this step works out how to realize the multi-level model in an information system. The final step is to implement the model. We chose two software tools for the implementation: Microsoft Office Excel, a commonly used tool for generating requirements documents, and Teamcenter from the Siemens PLM suite, a world-leading PLM platform with a requirements module.

The results of the study include an auto-injector requirement set, a workflow for using the multi-level model, two requirements templates for implementing the model in the two software tools, and two automatically generated requirement reports. Our model helps define the changed part of the requirements after analysis of a product change, avoiding the pitfalls of the current approach to reusing requirements. Based on these results, we draw the following conclusions. A practical multi-level requirements management model can be used for a medical device, the auto-injector, and the model can be implemented in different software tools to support reuse of existing requirement data when creating requirement models for new product development projects. Furthermore, the workflow and guideline to support application and maintenance of the requirement model can be successfully developed and implemented. Requirement documents and reports can be generated automatically through the software tools by following the workflow. And according to our assessment, the multi-level model can improve the reusability of requirements.
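The four-level classification lends itself to a simple data model. Below is a minimal Python sketch of how requirements tagged with the four levels could support the reuse workflow the abstract describes; the class names, fields, and requirement IDs are hypothetical illustrations, not the thesis's actual Excel or Teamcenter templates.

```python
from dataclasses import dataclass
from enum import Enum

class ReqLevel(Enum):
    """The four levels, ordered from most stable to most dynamic."""
    REGULATORY = 1    # derived from government/industry regulations
    PRODUCT_LINE = 2  # common to a family of products
    PRODUCT = 3       # specific to one product
    PROJECT = 4       # a product's configuration for one project

@dataclass
class Requirement:
    req_id: str
    text: str
    level: ReqLevel

def reusable_unchanged(reqs, changed_level):
    """When a change hits one level, requirements at more stable levels
    carry over unmodified; only the changed level and the more dynamic
    levels below it need review."""
    return [r for r in reqs if r.level.value < changed_level.value]

# Example: a product-level change leaves regulatory and product-line
# requirements untouched. (IDs and texts are invented.)
reqs = [
    Requirement("REG-001", "Comply with needle-safety regulations.", ReqLevel.REGULATORY),
    Requirement("PL-004", "Deliver a single fixed dose per activation.", ReqLevel.PRODUCT_LINE),
    Requirement("PRD-017", "Dose volume shall be 1.0 mL.", ReqLevel.PRODUCT),
]
print([r.req_id for r in reusable_unchanged(reqs, ReqLevel.PRODUCT)])
# prints ['REG-001', 'PL-004']
```

The point of keying each requirement to a level is exactly this filter: a change analysis names the most stable level affected, and everything above it is reused verbatim instead of re-read from scratch.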
535

SEMIGLOBAL CONTROL OF TIME-DELAY NONLINEAR SYSTEMS BY MEMORYLESS FEEDBACK

Wang, Yuanjiu 01 September 2021 (has links)
No description available.
536

Deep learning for large-scale holographic 3D particle localization and two-photon angiography segmentation

Tahir, Waleed 27 September 2021 (has links)
Digital inline holography (DIH) is a popular imaging technique in which an unknown 3D object can be estimated from a single 2D intensity measurement, also known as the hologram. One well-known application of DIH is 3D particle localization, which has found numerous use cases in biological sample characterization and optical measurement. Traditional techniques for DIH rely on linear models of light scattering, which account only for single scattering and ignore the multiple scattering among scatterers. This assumption becomes inaccurate at high particle densities and refractive index contrasts. Incorporating multiple scattering into the estimation process has been shown to improve reconstruction accuracy in numerous imaging modalities; however, existing multiple scattering solvers become computationally prohibitive for large-scale problems comprising millions of voxels within the scattering volume. This thesis addresses this limitation by introducing computationally efficient frameworks that account for multiple scattering in the reconstruction process for large-scale 3D data. We demonstrate the effectiveness of the proposed schemes on a DIH setup for 3D particle localization and show that incorporating multiple scattering significantly improves localization performance compared to traditional single-scattering approaches.

First, we discuss a scheme in which multiple scattering is computed using the iterative Born approximation by dividing the 3D volume into discrete 2D slices and computing the scattering among them. This method makes it feasible to compute multiple scattering for large volumes and significantly improves 3D particle localization compared to traditional methods. One limitation of this method is that the multiple scattering computation fails to converge when the sample is strongly scattering, a consequence of its dependence on the iterative Born approximation, which assumes weakly scattering samples. Our following work addresses this challenge by incorporating an alternative multiple scattering model that effectively handles strongly scattering samples without irregular convergence behavior. We demonstrate the improvement of the proposed method over linear scattering models for 3D particle localization, and we show statistically that it accurately models the hologram formation process.

Next, we address an outstanding challenge faced by many imaging applications: descattering, the removal of scattering artifacts. While deep neural networks (DNNs) have become the state of the art for descattering in many imaging modalities, multiple DNNs generally have to be trained when the range of scattering artifact levels is broad, because for optimal descattering performance each network must be specialized for a narrow range of artifact levels. We address this challenge with a novel DNN framework that dynamically adapts its network parameters to the level of scattering artifacts at the input, and we demonstrate optimal descattering performance without training multiple DNNs. We demonstrate the technique on a DIH setup for 3D particle localization and show that, even when trained on purely simulated data, the network achieves improved localization on both simulated and experimental data compared to existing methods.

Finally, we consider the problem of 3D segmentation and localization of blood vessels from large-scale two-photon microscopy (2PM) angiograms of the mouse brain. 2PM is a widely adopted imaging modality for 3D neuroimaging. The localization of brain vasculature from 2PM angiograms, and its subsequent mathematical modeling, has broad implications for disease diagnosis and drug development. Vascular segmentation, in which blood vessels are separated from the background, is generally the first step in the localization process. Because 2PM signal quality decays rapidly with imaging depth, the segmentation and localization of blood vessels from 2PM angiograms remains problematic, especially for deep vasculature. In this work, we introduce a high-throughput DNN with a semi-supervised loss function that not only localizes much deeper vasculature than existing methods, but also does so with greater accuracy and speed.
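For context, the slice-based idea in the first method can be sketched with a multi-slice (beam-propagation-style) forward model: the volume is split into 2D slices, and the field scattered by each slice illuminates the next, which is precisely the coupling a single-scattering model ignores. This is a simplified stand-in for the thesis's iterative-Born formulation, not its actual solver; the grid size, wavelength, and absorbing-particle slice model below are illustrative assumptions.

```python
import numpy as np

def angular_spectrum_propagate(field, dz, wavelength, dx):
    """Propagate a 2D complex field a distance dz through free space."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)                    # spatial frequencies
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))  # drop evanescent waves
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * dz))

def multislice_hologram(slices, dz, wavelength, dx):
    """Multi-slice forward model: each slice modulates the field it
    receives, so light scattered by earlier slices illuminates later
    ones -- unlike a linear model that sums independent slice terms."""
    field = np.ones_like(slices[0], dtype=complex)  # unit plane wave
    for t in slices:
        field = angular_spectrum_propagate(field * t, dz, wavelength, dx)
    return np.abs(field) ** 2                       # detector intensity

# Example: three sparse "particle" slices on a 256 x 256 grid.
rng = np.random.default_rng(0)
slices = []
for _ in range(3):
    t = np.ones((256, 256), dtype=complex)
    ys, xs = rng.integers(0, 256, 5), rng.integers(0, 256, 5)
    t[ys, xs] = 0.2                                 # absorbing particles
    slices.append(t)
hologram = multislice_hologram(slices, dz=100e-6, wavelength=0.5e-6, dx=2e-6)
```

Inverting such a model (recovering the slices from the hologram) is what makes multiple-scattering-aware localization expensive, which motivates the efficient frameworks described above.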
537

A deep learning-based approach towards automating visual reinforced concrete bridge inspections

Dube, Bright N 27 January 2022 (has links)
Visual inspections are fundamental to the maintenance of RC bridge infrastructure. However, their highly subjective nature often compromises the accuracy of inspection results and ultimately leads to inaccurate prioritisation of repair and rehabilitation activities. Visual inspections are also known to expose inspectors to height- and traffic-related hazards, and sometimes require the use of costly access equipment. The present study therefore investigated state-of-the-art Unmanned Aerial Vehicles (UAVs) and algorithms capable of automating visual RC bridge inspections in order to reduce inspector subjectivity, minimise inspection costs and enhance inspector safety. Convolutional neural network (CNN) algorithms are the state of the art for automatic detection of RC bridge defects. However, much of the prior research in this area focused on detecting the presence of defects and gave little to no attention to characterising them by defect type and degree (D) or extent (E) rating. Four proof-of-concept CNN models were therefore developed: a defect-type detector, a crack-type detector, an exposed-rebar detector and a shrinkage-crack D-rating model. Each model was built by first compiling defect images, labelling them according to defect/crack type, and creating training and test sets with a 90/10 split. The training sets were then used to train the CNN models through transfer learning and fine-tuning using the fastai deep learning Python library, as sketched below. The performance of each model was evaluated on its prediction accuracy on the test set and its robustness to noise. The trained models attained test accuracies ≥ 87%. This result shows that CNNs are capable of accurately identifying RC bridge corrosion, spalling, ASR, cracking and efflorescence, and of assigning appropriate D ratings to shrinkage cracks. It was concluded that CNN models can be built to identify and allocate D and E ratings to any visible defect type, provided the requisite training data, sufficiently representative of noisy real-world inspection conditions, can be acquired. This formed the basis of a practical framework for UAV-enabled, deep learning-based RC bridge inspections.
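A minimal sketch of that training recipe (transfer learning plus fine-tuning via fastai on a 90/10 split) is shown below, using fastai's high-level vision API. The folder layout, backbone choice, and epoch count are assumptions for illustration; the thesis's actual datasets and hyperparameters are not reproduced here.

```python
from fastai.vision.all import *

# Images arranged as defects/<label>/<file>.jpg, where <label> is the
# defect type (e.g. corrosion, spalling, ASR, cracking, efflorescence).
dls = ImageDataLoaders.from_folder(
    "defects",
    valid_pct=0.1, seed=42,       # hold out 10%, mirroring the 90/10 split
    item_tfms=Resize(224),
)

# Transfer learning: start from an ImageNet-pretrained backbone, then
# fine_tune trains the new head first, unfreezes, and trains all layers.
learn = vision_learner(dls, resnet34, metrics=accuracy)
learn.fine_tune(5)

print(learn.validate())           # loss and accuracy on the held-out 10%
```

The same pattern, retrained on a differently labelled folder tree, would serve for each of the four detectors, which is largely why fastai's transfer-learning workflow suits a proof-of-concept study like this one.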
538

Segmentation of complex scenes and object classification using neural networks

Miller, Mark G. 01 January 1994 (has links)
The focus of this thesis is the emerging technology known as neural networks, which has recently become quite successful in many forms of pattern recognition. A software implementation of a neural network based system and an overview of this form of artificial intelligence are presented in this paper. The system identifies geometric patterns embedded in a complex scene by first segmenting the image and then identifying the objects with a neural network. Automatic positioning mechanisms and military target recognition are two examples of applications that could use the technology outlined in this paper. A computer-generated image containing multiple sub-images of eight basic shapes is used as the input to the segmentation-classification system. The images are corrupted by large amounts of Gaussian noise and various shading patterns to closely simulate pictures taken by a camera. A region-growing algorithm, which incorporates histograms and neighborhood filters, is used to segment the complex scenes. The segmentation algorithm requires no prior intensity or object-position information to fully dissect the image. Additionally, variation in object location, rotation, and size does not affect the final classifications. A special set of moments is then calculated for each individual object. These moment features are the inputs to a feed-forward neural network trained with the back-propagation learning algorithm; once trained, the network activates the appropriate output to identify the shape class of the applied input, as sketched below. This overall system, which includes segmentation, feature extraction, and the neural network, achieves a high level of classification accuracy. The methodology used in developing this segmentation algorithm does not require prior knowledge of the application, and the invariant features used in the neural network classification make the system readily transferable to other applications. Only basic geometric patterns are classified by my segmentation and classification system; however, more complex shapes, such as text, could also be classified with minimal changes to the software. There are essentially no limitations on the two-dimensional shapes that can be recognized by this method, although additional preprocessing may be required to distinguish, for example, a hexagon from a circle.
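The moment-based pipeline described above can be sketched end to end: compute translation-, scale-, and rotation-invariant moments for each segmented object, then train a small feed-forward network by back-propagation on those features. Hu's moments and scikit-learn's MLPClassifier below are stand-ins for the thesis's "special set of moments" and hand-built network, and the synthetic square/disk data is purely illustrative.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def invariant_moments(mask):
    """First four Hu moments of a binary object: invariant to
    translation, scale, and rotation."""
    ys, xs = np.nonzero(mask)
    m00 = float(len(xs))
    xbar, ybar = xs.mean(), ys.mean()
    def eta(p, q):  # scale-normalized central moment
        return (((xs - xbar) ** p) * ((ys - ybar) ** q)).sum() / m00 ** (1 + (p + q) / 2)
    n20, n02, n11 = eta(2, 0), eta(0, 2), eta(1, 1)
    n30, n03, n21, n12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)
    return [n20 + n02,
            (n20 - n02) ** 2 + 4 * n11 ** 2,
            (n30 - 3 * n12) ** 2 + (3 * n21 - n03) ** 2,
            (n30 + n12) ** 2 + (n21 + n03) ** 2]

def shape(kind, r, cx, cy, n=64):
    """Binary mask containing one filled square or disk."""
    Y, X = np.mgrid[:n, :n]
    if kind == "square":
        return (abs(X - cx) <= r) & (abs(Y - cy) <= r)
    return (X - cx) ** 2 + (Y - cy) ** 2 <= r ** 2

# Build a toy training set with varying object position and size.
rng = np.random.default_rng(1)
feats, labels = [], []
for kind in ("square", "disk"):
    for _ in range(50):
        m = shape(kind, rng.integers(5, 12),
                  rng.integers(20, 44), rng.integers(20, 44))
        feats.append(invariant_moments(m))
        labels.append(kind)

# Feed-forward network trained by back-propagation on the moment features.
clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(16,), max_iter=5000,
                                  random_state=0))
clf.fit(feats, labels)
print("training accuracy:", clf.score(feats, labels))
print("unseen disk classified as:",
      clf.predict([invariant_moments(shape("disk", 9, 32, 32))]))
```

Because the features themselves absorb location, size, and rotation changes, the classifier never needs to see every pose of a shape, which is the transferability the abstract claims.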
539

Synthesis and Properties of Van der Waals-bonded Semiconductor Heterojunctions with Gallium Nitride

Lee, Choong hee January 2018 (has links)
No description available.
540

The Design and Analysis of a Low-Cost Combustion Analysis System

Steinbrunner, Brian D. January 2018 (has links)
No description available.
