171
Association Learning Via Deep Neural Networks. Landeen, Trevor J. 01 May 2018 (has links)
Deep learning has been making headlines in recent years and is often portrayed as an emerging technology on a meteoric rise towards fully sentient artificial intelligence. In reality, deep learning is the most recent renaissance of a 70-year-old technology and is far from possessing true intelligence. The renewed interest is motivated by recent successes on challenging problems, the accessibility made possible by hardware developments, and dataset availability.
The predecessor to deep learning, commonly known as the artificial neural network, is a computational network set up to mimic the biological neural structure found in brains. However, unlike human brains, artificial neural networks in most cases cannot make inferences from one problem to another. As a result, developing an artificial neural network requires a large number of examples of desired behavior for a specific problem. Furthermore, developing an artificial neural network capable of solving the problem can take days, or even weeks, of computation.
Two specific problems addressed in this dissertation are both input association problems. One problem challenges a neural network to identify overlapping regions in images and is used to evaluate the ability of a neural network to learn associations between inputs of similar types. The other problem asks a neural network to identify which observed wireless signals originated from observed potential sources and is used to assess the ability of a neural network to learn associations between inputs of different types.
The neural network solutions to both problems introduced, discussed, and evaluated in this dissertation demonstrate deep learning’s applicability to problems which have previously attracted little attention.
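The dissertation's actual network architectures are not reproduced here, but the input-association task itself can be framed as scoring every observation/source pair and assigning matches. In this purely illustrative sketch, the feature layout and the hand-written `score` function stand in for what a trained network would learn:

```python
# Hypothetical sketch of input association: score all (signal, source) pairs,
# then greedily pick the best one-to-one matches. A trained network would
# replace score(); everything here is invented for illustration.

def score(signal, source):
    # Toy similarity: negative squared distance between feature vectors.
    return -sum((a - b) ** 2 for a, b in zip(signal, source))

def associate(signals, sources):
    """Greedy one-to-one assignment of signals to sources by score."""
    pairs = sorted(
        ((score(sig, src), i, j)
         for i, sig in enumerate(signals)
         for j, src in enumerate(sources)),
        reverse=True)
    used_sig, used_src, result = set(), set(), {}
    for _, i, j in pairs:
        if i not in used_sig and j not in used_src:
            result[i] = j
            used_sig.add(i)
            used_src.add(j)
    return result

signals = [(0.0, 1.0), (5.0, 5.0)]
sources = [(4.9, 5.2), (0.1, 0.9)]
print(associate(signals, sources))  # {0: 1, 1: 0}
```

A learned scoring function generalises this idea to associations between inputs of different types, such as the wireless signals and candidate sources studied in the dissertation.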
172
Input Substitution in the Coal-Fired Electric Power Industry. Fatoorehchie, Mohammad 01 May 1979 (has links)
A gradual increase in the price of oil, a decline in the supply of gas, and a lag in nuclear construction leave coal (a potential major resource for future energy needs) as a fuel in ample supply. The major portion of the United States' electricity is generated by steam-driven generators where steam is produced by fossil fuel-fired boilers. In 1978, 47 percent of total electricity generation was fueled by coal, up from 43 percent in 1975. The use of coal in the generation of electricity has spawned numerous research projects concerning the economics of the coal-fired electric power industry.
The majority of the empirical works employed estimates of cost or production functions derived from traditional strongly separable functions (i.e., Cobb-Douglas or Constant Elasticity of Substitution models). In the case of multiple-output, multiple-input models, constancy of the elasticity of substitution proves to be highly restrictive. The limitations of conventional models have motivated the use of more general models, specifically the transcendental logarithmic function, which imposes no separability restriction a priori.
The absence of new empirical studies of the industry provides sufficient justification for an empirical study of the economic relationship between inputs and outputs in the coal-fired electric power industry. Also absent from previous works are machine mix and air pollution control factors. The analysis of substitution possibilities between inputs and the existence of technological change form the objectives of the present study. Substitution and price demand elasticities are estimated which provide guidelines and useful information for the planning and design of more efficient coal-fired power plants. These estimated elasticities can be used to analyze the impacts of selected government or industry policies, or they can provide guidance in further policy development and research.
A transcendental logarithmic multiple-input, multiple-output cost function is fitted to cross-section data for the coal-fired electric power industry for 1973 at the plant level. The likelihood ratio test is used to empirically test the validity of various restrictions on the productive structure. The model used in this study provides for a share-specific elasticity to be computed for each price and share observation.
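A standard textbook form of the translog cost specification (a generic form, not necessarily the exact function estimated in this study) can be written as:

```latex
\ln C = \alpha_0 + \sum_i \alpha_i \ln p_i + \sum_k \beta_k \ln y_k
      + \tfrac{1}{2}\sum_i \sum_j \gamma_{ij} \ln p_i \ln p_j
      + \tfrac{1}{2}\sum_k \sum_l \delta_{kl} \ln y_k \ln y_l
      + \sum_i \sum_k \rho_{ik} \ln p_i \ln y_k
```

where the $p_i$ are input prices and the $y_k$ are outputs. By Shephard's lemma the cost share of input $i$ is $S_i = \partial \ln C / \partial \ln p_i$, and the Allen partial elasticities of substitution follow as $\sigma_{ij} = (\gamma_{ij} + S_i S_j)/(S_i S_j)$ for $i \neq j$. Because these vary with the observed shares, the model imposes no constant elasticity of substitution a priori, which is the flexibility the study exploits.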
Results drawn from this study suggest that models with constant elasticity of substitution (i.e., Cobb-Douglas and Constant Elasticity of Substitution models) do not appropriately represent the structure of the United States' coal-fired electric power industry. The empirical findings at the industry level show that substitution possibilities can be found for several vintages. Scale economies are present, and, contrary to previous findings for the power industry, it was found that coal-fired power plants do not operate on the flat portion of the average cost curve.
173
Direct 3D Interaction Using A 2D Locator Device. Ansari, Anees 01 July 2003 (has links)
Traditionally, direct 3D interaction has been limited to true 3D devices, whereas 2D devices have been used only for indirect 3D interaction. To date, little research has tried to extend the use of the mouse to direct 3D interaction. In this research we explore the issues involved in using the mouse to accommodate the additional degrees of freedom required for 3D interaction. We put forth a unique and innovative design to achieve this objective and show that even a device as simple as the mouse can be highly effective for 3D interaction when supported by an appropriate underlying design. We also discuss in detail a software prototype, "Direct3D," that we have developed based on our design, and we hope to take a step towards making direct 3D interaction easy, inexpensive, and available to all computer users.
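The thesis's actual "Direct3D" design is not reproduced here. As one illustrative way to give a 2D locator a third degree of freedom (the mapping, function name, and sensitivity below are invented), a modifier key can switch the mouse's vertical axis between screen-up and depth:

```python
# Illustrative sketch, not the thesis's design: map 2D mouse deltas onto
# three translation axes, using a modifier key to retarget the vertical axis.

def mouse_to_3d(dx, dy, depth_mode, sensitivity=0.01):
    """Return a (tx, ty, tz) translation for a 2D mouse movement.

    dx, dy      -- mouse deltas in pixels
    depth_mode  -- True while a modifier key (e.g. Ctrl) is held
    """
    tx = dx * sensitivity
    if depth_mode:
        # Vertical motion pushes into or out of the scene.
        return (tx, 0.0, -dy * sensitivity)
    # Screen y grows downward, so negate for world-up.
    return (tx, -dy * sensitivity, 0.0)

print(mouse_to_3d(10, -20, False))
print(mouse_to_3d(10, -20, True))
```

Designs like this trade modal state (the held key) for the missing degree of freedom, which is the central issue such a prototype must make comfortable for users.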
174
Machine Vision as the Primary Sensory Input for Mobile, Autonomous Robots. Lovell, Nathan January 2006 (has links)
Image analysis, and its application to sensory input (computer vision), is a fairly mature field, so it is surprising that its techniques are not extensively used in robotic applications. The reason for this is that, traditionally, robots have been used in controlled environments where sophisticated computer vision was not necessary, for example in car manufacturing. As the field of robotics has moved toward providing general purpose robots that must function in the real world, it has become necessary that the robots be provided with robust sensors capable of understanding the complex world around them. However, when researchers apply techniques previously studied in the image analysis literature to the field of robotics, several difficult problems emerge. In this thesis we examine four reasons why it is difficult to apply work in image analysis directly to real-time, general purpose computer vision applications. These are: improvement in the computational complexity of image analysis algorithms, robustness to dynamic and unpredictable visual conditions, independence from domain-specific knowledge in object recognition, and the development of debugging facilities. This thesis examines each of these areas, making several innovative contributions in each. We argue that, although each area is distinct, improvement must be made in all four areas before vision will be utilised as the primary sensory input for mobile, autonomous robotic applications. In the first area, the computational complexity of image analysis algorithms, we note the dependence of a large number of high-level processing routines on a small number of low-level algorithms. Therefore, improvement to a small set of highly utilised algorithms will yield benefits in a large number of applications. In this thesis we examine the common tasks of image segmentation, edge and straight line detection, and vectorisation.
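The thesis's own algorithms are not reproduced here, but a toy sketch shows the kind of low-level, per-pixel routine at stake: classify every pixel and run-length encode each row, a representation that higher-level recognition then consumes. The threshold classifier below is invented for illustration:

```python
# Toy illustration (not the thesis's algorithms) of a low-level vision pass:
# per-pixel classification followed by row-wise run-length encoding.

def classify(pixel, threshold=128):
    # Hypothetical 1-bit colour classification on a grayscale value.
    return 1 if pixel >= threshold else 0

def rle_rows(image):
    """Run-length encode each row of class labels as (label, length) pairs."""
    encoded = []
    for row in image:
        runs, prev, count = [], classify(row[0]), 1
        for px in row[1:]:
            label = classify(px)
            if label == prev:
                count += 1
            else:
                runs.append((prev, count))
                prev, count = label, 1
        runs.append((prev, count))
        encoded.append(runs)
    return encoded

image = [[10, 10, 200, 200, 200],
         [10, 200, 200, 10, 10]]
print(rle_rows(image))  # [[(0, 2), (1, 3)], [(0, 1), (1, 2), (0, 2)]]
```

Because every frame passes through loops like this, even small constant-factor improvements to such routines compound across all higher-level processing.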
In the second area, robustness to dynamic and unpredictable conditions, we examine how vision systems can be made more tolerant to changes of illumination in the visual scene. We examine the classical image segmentation task and present a method for illumination independence that builds on our work from the first area. The third area is the reliance on domain-specific knowledge in object recognition. Many current systems depend on a large amount of hard-coded domain-specific knowledge to understand the world around them. This makes the system hard to modify, even for slight changes in the environment, and very difficult to apply in a different context entirely. We present an XML-based language, the XML Object Definition (XOD) language, as a solution to this problem. The language is largely descriptive instead of imperative: instead of describing how to locate objects within each image, the developer simply describes the properties of the objects. The final area is the development of support tools. Vision system programming is extremely difficult because large amounts of data are handled at a very fast rate. If the system is running on an embedded device (such as a robot) then locating defects in the code is a time-consuming and frustrating task. Development-support tools exist for many specific applications; we present a general purpose development-support tool for embedded, real-time vision systems. The primary case study for this research is robotic soccer, in the international RoboCup Four-Legged league. We utilise all of the research of this thesis to provide the first illumination-independent object recognition system for RoboCup. Furthermore, we illustrate the flexibility of our system by applying it to several other tasks and to marked changes in the visual environment for RoboCup itself.
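The XOD language itself is not reproduced in this abstract; the sketch below invents a minimal XML object description of the same descriptive flavour and shows how a vision system might check detected blobs against it. The element names and the blob dictionary are assumptions, not XOD syntax:

```python
# Hypothetical descriptive object definition (invented, not actual XOD):
# the developer states object properties; matching code stays generic.

import xml.etree.ElementTree as ET

DEFINITION = """
<object name="ball">
  <colour>orange</colour>
  <shape>circle</shape>
  <min-size>20</min-size>
</object>
"""

def matches(blob, definition_xml):
    """Check a candidate blob (a dict of detected properties) against a definition."""
    obj = ET.fromstring(definition_xml)
    return (blob["colour"] == obj.findtext("colour")
            and blob["shape"] == obj.findtext("shape")
            and blob["size"] >= int(obj.findtext("min-size")))

blob = {"colour": "orange", "shape": "circle", "size": 35}
print(matches(blob, DEFINITION))  # True
```

Keeping the definition descriptive means adapting to a new environment is an edit to data, not to recognition code, which is exactly the flexibility argued for above.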
175
Utopia unrealised: an evaluation of a consultancy to develop a national framework for police education and training to enhance frontline response to illicit drug problems in Australia. Conway, Jane Frances January 2004 (has links)
This dissertation presents an evaluation of a funded consultancy that was intended to bring about change in the education and training of police in Australia in response to illicit drugs. Sponsored by what was at the time known as the Commonwealth Department of Health and Aged Care, the ultimate goal of the consultancy was a national framework for police education and training to enhance frontline police response to illicit drug problems. The research used a case study design. Guba and Stufflebeam’s (1970) Context, Input, Process, and Product (CIPP) model was used to organise the presentation of a rich description of the design, development and implementation of the consultancy. Application of this framework enabled illumination of a number of issues related to social policy, change and innovation, and quality improvement processes. The study explores the role of education and training in organisational change and concludes that the potential of external consultancy activity to effect meaningful change in police education, training and practice is limited by a number of factors. Key findings of the study are that while a number of consultancy processes could have been enhanced, the primary determinants of the extent to which a change in police education and training will enhance frontline practice are contextual and conceptual factors. The study reveals that the response of frontline police to illicit drug use is influenced by multivariate factors. The findings of this study suggest that while frontline police are keen to provide solutions to a range of practice issues in response to illicit drug problems, they desire concrete strategies that are well defined and supported by management, consistent with policy and within the law. 
However, the complexity of police activity in response to illicit drugs, the dissonance between the conceptual frameworks of police and health agencies, and resistance to what is perceived as externally initiated change in police practice, education and training were found to be powerful inhibitors of a utopian attempt to enhance frontline police response to illicit drug problems. Using the metaphor of board games, the study concludes that the development of an education and training framework will be of little value in achieving enhanced frontline practice in response to illicit drug problems unless the criteria for enhanced response are made more explicit and seen to be congruent with both the conceptualisation and operationalisation of police roles and functions. Moreover, the study questions the mechanisms through which changes in policy are conceived, implemented and evaluated, and highlights a need for greater congruence between evaluation frameworks and the nature of change.
176
Reversible binary counter and shaft position indicator. January 1947 (has links)
By H.P. Stabler. "March 3, 1947." Includes bibliographical references. Army Signal Corps Contract No. W-36-039 sc-32037.
177
none. Hu, Chih-chiang 11 August 2007 (has links)
For decades, growth in national income has been driven by the development of the information economy. Besides the high-tech industries and the Information and Communication Technologies (ICTs), information-related industries have contributed to the growth of national income. This study measures the size and the structure of the information economy in Taiwan. In order to recognize trends and differences in the information economies across countries, we adopt Porat's (1977) study as our framework. The purposes of this study are listed below:
1. Measuring the size and the structure of the information economy in Taiwan.
2. Proposing improvements to the methodology for measuring the information economy, especially regarding the data sources and the identification of information occupations in Taiwan.
3. Identifying differences in the time series between Taiwan and other countries when developing the information economy model, and making policy suggestions based on them.
Key Words: Information Economy, Input-Output Table, ICT, National Income Accounts, Value-added
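The study's actual input-output computations are not reproduced here. As a heavily simplified sketch of the Porat-style measurement idea, the share of the information economy can be expressed as the value added of sectors classified as "information" over total value added; the sector names and figures below are invented:

```python
# Simplified, illustrative computation of an information-economy share.
# Sector classification and value-added figures are hypothetical.

value_added = {                    # hypothetical value added by sector
    "information services": 320.0,
    "ICT manufacturing":    180.0,
    "agriculture":          150.0,
    "other manufacturing":  350.0,
}
information_sectors = {"information services", "ICT manufacturing"}

info_va = sum(v for k, v in value_added.items() if k in information_sectors)
total_va = sum(value_added.values())
share = info_va / total_va
print(f"information economy share: {share:.1%}")  # information economy share: 50.0%
```

In practice the hard part, as the study's second purpose notes, is the classification step: deciding which sectors and occupations count as "information" before any shares can be summed.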
178
Sketch Recognition on Mobile Devices. Lucchese, George, 1987-. 14 March 2013 (has links)
Sketch recognition allows computers to understand and model hand drawn sketches and diagrams. Traditionally, sketch recognition systems have required a pen-based PC interface, but powerful mobile devices such as tablets and smartphones can provide a new platform for sketch recognition systems. We describe a new sketch recognition library, Strontium (SrL), that combines several existing sketch recognition libraries modified to run both on personal computers and on the Android platform. We analyzed the recognition speed and accuracy implications of performing low-level shape recognition on smartphones with touch screens. We found that there is a large gap in recognition speed on mobile devices between recognizing simple shapes and more complex ones, suggesting that mobile sketch interface designers limit the complexity of their sketch domains. We also found that a low sampling rate on mobile devices can affect recognition accuracy of complex and curved shapes. Despite this, we found no evidence to suggest that using a finger as an input implement leads to a decrease in simple shape recognition accuracy. These results show that the same geometric shape recognizers developed for pen applications can be used in mobile applications, provided that developers keep shape domains simple and ensure that the input sampling rate is kept as high as possible.
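Strontium's actual recognizers are not reproduced here; the sketch below shows the flavour of a minimal geometric test such libraries use. A stroke is called a line when the distance between its endpoints is close to its total path length, and sparse sampling (as the abstract notes for touch screens) makes curved strokes look deceptively straight under exactly this kind of test. The threshold is an invented illustration:

```python
# Illustrative geometric shape test (not Strontium's implementation):
# straightness = chord length / path length, near 1.0 for a line.

import math

def straightness(points):
    path = sum(math.dist(points[i], points[i + 1])
               for i in range(len(points) - 1))
    chord = math.dist(points[0], points[-1])
    return chord / path if path else 1.0

def is_line(points, threshold=0.95):
    return straightness(points) >= threshold

line = [(0, 0), (1, 1), (2, 2), (3, 3)]
arc = [(0, 0), (1, 1), (2, 1), (3, 0)]
print(is_line(line), is_line(arc))  # True False
```

With only a few sample points, an arc's path length shrinks toward its chord, pushing the ratio above the threshold; this is one concrete way a low sampling rate degrades curved-shape accuracy.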
179
Pros and Cons in Immersion: A Study of a Swedish and Italian Exchange Project Focused on Immersion. Semb, Oscar January 2013 (has links)
This study examines the Comenius exchange project between Da Vinci, Kattegattgymnasiet in Sweden and Liceo Scientifico F. Vercelli in Asti, Italy from both a qualitative and a quantitative angle. This exchange project worked with immersion. The purpose of this essay is to investigate to what extent second language learning is achieved in an immersion project. The essay aims to answer the following thesis questions: What are the learning outcomes of this exchange project, focused on immersion? What are the advantages and disadvantages of an exchange project focused on immersion? To undertake this study, I travelled to Asti and distributed a quantitative questionnaire to the students in this project. Qualitative interviews were conducted with the two main teachers and with the students as well. The data was then processed and analyzed within my theoretical framework: second language acquisition theory and immersion. The results show that the Swedish students were better at speaking, that the Italian teacher focused more on grammar, that the objectives were sometimes unclear, and that language development occurred. The study also shows that such a project can create challenges between colleagues and that it is time-consuming. For further research I suggest a focus on why language development occurs in a project like this. It would also be interesting to analyze a project like this through observation of the linguistic content of the lessons and the specific differences between Italy and Sweden. Key words: Immersion, Second language acquisition, input, exchange project, language instruction
180
Data Requirements for a Look-Ahead System. Holma, Erik January 2007 (has links)
Look-ahead cruise control uses recorded topographic road data combined with a GPS to control vehicle speed. The purpose of this is to save fuel without a change in travel time for a given road. This thesis explores the sensitivity of look-ahead systems to different disturbances. Two different systems are investigated: one using a simple precalculated speed trajectory without feedback, and one based upon a model predictive control scheme with dynamic programming as the optimizing algorithm. Defective input data, such as inaccurate positioning, disturbed angle data, faults in mass estimation, and an incorrect wheel radius, are discussed in this thesis, along with errors in the systems' environmental model. Simulations over real road profiles are performed with two different quantizations of the road slope data. The results from quantization of the angle data are important since quantization will be unavoidable in an implementation of a topographic road map. The simulations show that disturbing the fictive road profiles results in quite large deviations from the optimal case; for the recorded real road sections, however, the differences are close to zero. Finally, conclusions are drawn on how large deviations from real-world data a look-ahead system can tolerate.
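The thesis's model predictive controller is not reproduced here. As a heavily simplified sketch of the dynamic-programming idea behind it, a speed trajectory can be chosen over a discretised speed grid along known road slopes; the fuel model, weights, and grids below are invented for illustration:

```python
# Simplified look-ahead planning sketch (not the thesis's model): dynamic
# programming over a speed grid, one decision per road segment.

SPEEDS = [70, 80, 90]          # km/h grid
SLOPES = [0.0, 0.02, -0.02]    # road slope per 1 km segment
TIME_WEIGHT = 2.0              # trade-off between fuel use and travel time

def segment_cost(v, slope):
    fuel = 0.05 * v + 40.0 * max(slope, 0.0)   # toy fuel consumption model
    time = 60.0 / v                            # toy time cost per segment
    return fuel + TIME_WEIGHT * time

def plan(slopes):
    """Return (total_cost, speed_per_segment) minimising cost by DP."""
    # best[v] = (cost so far, speed trajectory) for paths ending at speed v
    best = {v: (0.0, []) for v in SPEEDS}
    for slope in slopes:
        nxt = {}
        for v in SPEEDS:
            # Penalise speed changes to mimic acceleration cost.
            nxt[v] = min(
                (best[u][0] + segment_cost(v, slope) + 0.01 * abs(v - u),
                 best[u][1] + [v])
                for u in SPEEDS)
        best = nxt
    return min(best.values())

cost, trajectory = plan(SLOPES)
print(trajectory)
```

A formulation like this makes the thesis's sensitivity question concrete: quantizing or disturbing the slope values in `SLOPES` changes the segment costs and can shift the planned trajectory away from the optimum.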