101

Complementarities

Dupor, Bill. January 1997 (has links)
Thesis (Ph. D.)--University of Chicago, Dept. of Economics, June 1997. / Includes bibliographical references. Also available on the Internet.
102

Lebenszyklusanalyse fossiler, nuklearer und regenerativer Stromerzeugungstechniken [Life cycle analysis of fossil, nuclear, and renewable power generation technologies]

Marheineke, Torsten. January 2002 (has links) (PDF)
Universität, Diss., 2002--Stuttgart.
103

Arranging simple neural networks to solve complex classification problems

Ghaderi, Reza January 2000 (has links)
In "decomposition/reconstruction" strategy, we can solve a complex problem by 1) decomposing the problem into simpler sub-problems, 2) solving sub-problems with simpler systems (sub-systems) and 3) combining the results of sub-systems to solve the original problem. In a classification task we may have "label complexity" which is due to high number of possible classes, "function complexity" which means the existence of complex input-output relationship, and "input complexity" which is due to requirement of a huge feature set to represent patterns. Error Correcting Output Code (ECOC) is a technique to reduce the label complexity in which a multi-class problem will be decomposed into a set of binary sub-problems, based oil the sequence of "0"s and "1"s of the columns of a decomposition (code) matrix. Then a given pattern can be assigned to the class having minimum distance to the results of sub-problems. The lack of knowledge about the relationship between distance measurement and class score (like posterior probabilities) has caused some essential shortcomings to answering questions about "source of effectiveness", "error analysis", " code selecting ", and " alternative reconstruction methods" in previous works. Proposing a theoretical framework in this thesis to specify this relationship, our main contributions in this subject are to: 1) explain the theoretical reasons for code selection conditions 2) suggest new conditions for code generation (equidistance code)which minimise reconstruction error and address a search technique for code selection 3) provide an analysis to show the effect of different kinds of error on final performance 4) suggest a novel combining method to reduce the effect of code word selection in non-optimum codes 5) suggest novel reconstruction frameworks to combine the component outputs. Some experiments on artificial and real benchmarks demonstrate significant improvement achieved in multi-class problems when simple feed forward neural networks are arranged based on suggested framework To solve the problem of function complexity we considered AdaBoost, as a technique which can be fused with ECOC to overcome its shortcoming for binary problems. And to handle the problems of huge feature sets, we have suggested a multi-net structure with local back propagation. To demonstrate these improvements on realistic problems a face recognition application is considered. Key words: decomposition/ reconstruction, reconstruction error, error correcting output codes, bias-variance decomposition.
104

Expressive Input

McIntyre, James January 2016 (has links)
Expressive Input is the culmination of 18 weeks of prototyping, ideation and research conducted as my degree project at Umeå Institute of Design. The project presents three design provocations which aim to raise questions about the opportunity to create a dialogue with the physical controls we interact with. While words like "smart" or "connected" get thrown around quite often, this work aims to show that there is a role for expression within the relationship we have with our devices. Expression within this context is defined as how we can make user interfaces that leverage advances in sensors and feedback in order to feel more human. The work presents three scenarios that might exist within an automotive context, and demonstrates solutions that encourage users to maintain visual attention on the task of driving. The project was conducted by running a series of short sprints focused on specific problems; the intention of this approach was to identify unique opportunities for future design work to explore.
105

Consensus control for multi-agent systems with input delay

Wang, Chunyan January 2016 (has links)
This thesis applies predictor-based methods to the distributed consensus control of multi-agent systems with input delay. "Multi-agent system" is a term used to describe a group of agents connected together over a communication network to achieve specified control tasks. In many applications, the subsystems or agents are required to reach an agreement upon certain quantities of interest, which is referred to as "consensus control". The input delay may represent delays in the network communication. The main contribution of this thesis is to provide feasible methods for the consensus control of general multi-agent systems with input delay. The consensus control of general linear multi-agent systems with parameter uncertainties and input delay is first investigated under directed network connections. The Artstein reduction method is applied to deal with the input delay. By transforming the Laplacian matrix into its real Jordan form, delay-dependent conditions are derived to guarantee robust consensus for uncertain multi-agent systems with input delay. The results are then extended to a class of Lipschitz nonlinear multi-agent systems, and the impacts of the Lipschitz nonlinearity and the input delay on consensus are investigated. Using tools from control theory and graph theory, sufficient conditions based on the Lipschitz constant are identified under which the proposed protocols tackle the nonlinear terms in the system dynamics. Beyond time delay, external disturbances are inevitable in practical systems, including multi-agent systems, so consensus disturbance rejection problems are also investigated. For linear multi-agent systems with bounded external disturbances, the Truncated Predictor Feedback (TPF) approach is applied to deal with the input delay, the H-infinity consensus analysis is put in the framework of Lyapunov analysis, and sufficient conditions are derived to guarantee H-infinity consensus in the time domain. Some disturbances in real engineering problems have inherent characteristics such as harmonics and unknown constant loads. For those kinds of disturbances in Lipschitz nonlinear multi-agent systems with input delay, the Disturbance Observer-Based Control (DOBC) technique is applied to design disturbance observers. A new predictor-based control scheme is constructed for each agent by utilizing the estimate of the disturbance and the prediction of the relative state information. Sufficient delay-dependent conditions are derived to guarantee consensus with disturbance rejection.
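As a toy illustration of the underlying problem (not the predictor-based design the thesis develops), the standard consensus protocol u = -Lx applied through an input delay can be simulated in a few lines; the ring graph, delay, and step size below are assumptions picked for illustration:

```python
import numpy as np

# Four single-integrator agents x_i'(t) = u_i(t - tau) running the
# standard consensus protocol u = -L x over a ring graph (Laplacian
# below), integrated with Euler steps.
L_graph = np.array([[ 2, -1,  0, -1],
                    [-1,  2, -1,  0],
                    [ 0, -1,  2, -1],
                    [-1,  0, -1,  2]], dtype=float)

dt, tau, steps = 0.01, 0.2, 2000
delay = int(tau / dt)
x = np.array([1.0, -2.0, 3.0, 0.5])
u_hist = [np.zeros(4)] * delay      # control is zero before t = 0

for _ in range(steps):
    u_hist.append(-L_graph @ x)     # control computed from the current state
    x = x + dt * u_hist.pop(0)      # but applied only after the input delay

print(np.round(x, 3))  # states settle near the initial average, 0.625
```

With tau = 0.2 the delayed protocol still converges here (the delay is below the pi/(2*lambda_max) margin for this graph); increasing tau past that margin makes the toy system oscillate and diverge, which is the kind of delay-dependent condition the thesis derives rigorously.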
106

Virtual reality and input devices: The habit of gaming

Lundmark, Simon January 2017 (has links)
With the huge rise in popularity of Virtual Reality headsets, the market has become a bit of a wild-west situation in which the technology is being explored for strengths, weaknesses and possible uses. VR headsets have also opened up the possibility of exploring alternate input and output devices to give a more realistic feeling, and the boom has opened the doors for the use of Virtual Reality within education. The purpose of this paper is to investigate whether there is a difference between people who play video games and people who do not when using Virtual Reality. This was tested in a five-minute experience built with Unreal Engine 4, using the HTC Vive and Leap Motion as hardware.
107

National survey of early hearing detection and intervention in the private health care sector

Meyer, Miriam Elsa 03 December 2012 (has links)
Dissertation (M Communication Pathology)--University of Pretoria, 2013. / Speech-Language Pathology and Audiology / Unrestricted
108

A computer visual-input system for the automatic recognition of blood cells

Cossalter, John George January 1970 (has links)
A computer visual-input system was built for the purpose of studying the classification of leukocytes. It consisted of an image dissector camera interfaced directly to a D.E.C. PDP-9 computer; a display of the image field was also provided, using a monitoring scope. The design and hardware arrangement of the system are briefly described, while detailed diagrams of the logic networks are shown in Appendix II. Photomicrographs of neutrophils were used as a pattern set in a study of the computer classification of cell age and lobularity. Clustering of feature vectors was noted in a two-dimensional measurement space, showing that metamyelocyte, banded and segmented cells can be distinguished. A square contour-trace of the neutrophil nuclei was performed, and an area operator pre-processed the shape of a nucleus into a curvature function. Peaks in this curvature function, a measure of lobularity, as well as the ratio of the perimeter to the square root of the nuclear area, a measure of the irregularity of the nuclear boundary, were used as orientation- and size-independent features. The area operator was found to be unsuitable for extracting curvature from leukocyte images: in cases of extreme nuclear curvature and nuclear filamentation, the basic formulations of the operator were violated, giving an erroneous measure of curvature. The general form of the frequency spectrum of the video signal from the image dissector camera was derived, and the signal bandwidth requirements and the camera resolution were found experimentally. / Faculty of Applied Science / Department of Electrical and Computer Engineering / Graduate
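The perimeter-to-square-root-of-area feature mentioned above is straightforward to reproduce; a sketch of an illustrative reimplementation (not the original 1970 system), with made-up test contours:

```python
import numpy as np

def boundary_irregularity(contour):
    """Perimeter / sqrt(area) for a closed contour given as an (N, 2)
    array of (x, y) points -- scale- and orientation-independent, and
    minimised (value 2*sqrt(pi) ~ 3.545) by a perfect circle."""
    closed = np.vstack([contour, contour[:1]])
    perimeter = np.sum(np.linalg.norm(np.diff(closed, axis=0), axis=1))
    x, y = contour[:, 0], contour[:, 1]
    # Shoelace formula for the polygon area.
    area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
    return perimeter / np.sqrt(area)

theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
circle = np.column_stack([np.cos(theta), np.sin(theta)])
lobed = np.column_stack([(1 + 0.4 * np.cos(3 * theta)) * np.cos(theta),
                         (1 + 0.4 * np.cos(3 * theta)) * np.sin(theta)])
print(boundary_irregularity(circle))  # ~3.545 for a circle
print(boundary_irregularity(lobed))   # larger for a three-lobed nucleus-like shape
```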
109

A GPU based X-Engine for the MeerKAT Radio Telescope

Callanan, Gareth Mitchell January 2020 (has links)
The correlator is a key component of the digital backend of a modern radio telescope array. The 64-antenna MeerKAT telescope has an FX-architecture correlator consisting of 64 F-Engines and 256 X-Engines. These F- and X-Engines are all hosted on 128 custom-designed FPGA processing boards, known as SKARABs. One SKARAB X-Engine board hosts four logical X-Engines, ingests data at 27.2 Gbps over a 40 GbE connection, and correlates this data in real time. GPU technology has improved significantly since the SKARAB was designed, and GPUs are now becoming viable alternatives to FPGAs in high-performance streaming applications. The objective of this dissertation is to investigate how to build a GPU drop-in replacement X-Engine for MeerKAT and to compare this implementation to a SKARAB X-Engine. This includes the construction and analysis of a prototype GPU X-Engine. The 40 GbE ingest, the GPU correlation algorithm and the software pipeline framework that links the two together were identified as the three main sub-systems to focus on. A number of different tools implementing these sub-systems were examined, with the most suitable ones chosen for the prototype. A prototype dual-socket system was built that could process the equivalent of two SKARABs' worth of X-Engine data. This prototype has two 40 GbE Mellanox NICs running the SPEAD2 library and a single Nvidia GeForce 1080 Ti GPU running the xGPU library. A custom pipeline framework built on top of the Intel Threading Building Blocks (TBB) library was designed to facilitate the flow of data between these sub-systems. The prototype system was compared to two SKARABs. For an equivalent amount of processing, the GPU X-Engine cost R143 000 while the two SKARABs cost R490 000. The power consumption of the GPU X-Engine was more than twice that of the SKARABs (400 W compared to 180 W), while only requiring half as much rack space. GPUs as X-Engines were found to be more suitable than FPGAs when cost and density are the main priorities; when power consumption is the priority, FPGAs should be used. When running eight logical X-Engines, 85% of the prototype's CPU cores were used while only 75% of the GPU's compute capacity was utilised; the main bottleneck was on the CPU side of the server. This dissertation suggests that the next iteration of the system should offload some CPU-side processing to the GPU and double the number of 40 GbE ports, potentially doubling the system throughput. When considering methods to improve the system, an FPGA/GPU hybrid X-Engine concept was developed that would combine the power-saving advantage of FPGAs with the low cost-to-compute ratio of GPUs.
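The X-Engine's core operation in an FX correlator is a per-frequency-channel cross-multiply-and-accumulate over all antenna pairs. A toy NumPy stand-in for what the xGPU library accelerates on the GPU (the array shapes are illustrative assumptions, not MeerKAT's actual dimensions):

```python
import numpy as np

def x_engine(voltages):
    """For each frequency channel, accumulate the complex cross products
    between every pair of antennas. `voltages` has shape
    (n_ant, n_chan, n_time) holding channelised (F-Engine) output;
    returns visibilities of shape (n_ant, n_ant, n_chan)."""
    return np.einsum('pft,qft->pqf', voltages, voltages.conj())

rng = np.random.default_rng(0)
v = rng.normal(size=(4, 8, 64)) + 1j * rng.normal(size=(4, 8, 64))
vis = x_engine(v)
print(vis.shape)                                          # (4, 4, 8): all baselines per channel
print(np.allclose(vis, vis.conj().transpose(1, 0, 2)))    # visibility matrix is Hermitian: True
```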
110

Analytic Treatment of Deep Neural Networks Under Additive Gaussian Noise

Alfadly, Modar 12 April 2018 (has links)
Despite the impressive performance of deep neural networks (DNNs) on numerous vision tasks, they still exhibit behaviours that are yet to be understood. One puzzling behaviour is the reaction of DNNs to various noise attacks: it has been shown that there exist small adversarial perturbations that can result in severe degradation of DNN performance. To treat this rigorously, we derive exact analytic expressions for the first and second moments (mean and variance) of a small piecewise linear (PL) network with a single rectified linear unit (ReLU) layer subject to general Gaussian input. We show experimentally that these expressions are tight under simple linearizations of deeper PL-DNNs, including popular architectures in the literature (e.g. LeNet and AlexNet). Extensive experiments on image classification show that these expressions can be used to study the behaviour of the output mean of the logits for each class, the inter-class confusion, and the pixel-level spatial noise sensitivity of the network. Moreover, we show how these expressions can be used to systematically construct targeted and non-targeted adversarial attacks. We then propose a special estimator DNN, named mixture of linearizations (MoL), and derive the analytic expressions for its output mean and variance as well. We employ these expressions to train the model to be particularly robust against Gaussian attacks without the need for data augmentation. Upon training this network on a loss consolidated with the derived output probabilistic moments, the network is not only robust under very-high-variance Gaussian attacks but is also as robust as networks trained with 20-fold data augmentation.
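The scalar building block behind such expressions is the classical exact result for the moments of a ReLU under Gaussian input; a sketch checking the analytic formulas against Monte Carlo (this is the well-known single-unit case, not the thesis's network-level derivation):

```python
import numpy as np
from scipy.stats import norm

def relu_gaussian_moments(mu, sigma):
    """Exact mean and variance of ReLU(x) for x ~ N(mu, sigma^2):
    E[ReLU(x)]   = mu*Phi(mu/sigma) + sigma*phi(mu/sigma)
    E[ReLU(x)^2] = (mu^2 + sigma^2)*Phi(mu/sigma) + mu*sigma*phi(mu/sigma)."""
    a = mu / sigma
    mean = mu * norm.cdf(a) + sigma * norm.pdf(a)
    second = (mu**2 + sigma**2) * norm.cdf(a) + mu * sigma * norm.pdf(a)
    return mean, second - mean**2

mu, sigma = 0.5, 2.0
mean, var = relu_gaussian_moments(mu, sigma)
samples = np.maximum(0.0, np.random.default_rng(1).normal(mu, sigma, 1_000_000))
print(mean, samples.mean())   # analytic vs Monte Carlo mean agree
print(var, samples.var())     # analytic vs Monte Carlo variance agree
```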
