About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.

O desenvolvimento de um sistema de animação facial baseado em performance e no uso de câmera RGB-D / The development of a facial animation system based on performance and the use of an RGB-D camera

Silva, Carlos Eduardo Rossi Cubas da [UNESP] 02 February 2017 (has links)
In recent decades, interest in capturing human face movements and identifying their expressions for the purpose of generating realistic facial animations has increased in both the scientific community and the entertainment industry. High accuracy in this process is necessary because humans are trained to identify facial expressions, easily detecting small imperfections in the animation of a virtual face. Performance-based facial animation is one of the techniques used to generate realistic animations, especially in movies. With the emergence of low-cost RGB-D cameras, many performance-based facial animation systems have been developed, sharing many fundamental principles but with specific implementation details. These systems consist of a phase of tracking movements and identifying facial expressions, followed by an expression retargeting procedure. In this sense, modular environments for performance-based facial animation are extremely useful for incorporating new tracking algorithms with standardized input and output characteristics.
Considering this context, the main objective of this work was the creation and validation of an environment with a modular architecture for performance-based facial animation that used an RGB-D camera to capture the facial movements of an actor and allowed the incorporation of the main tracking algorithms found in the literature, aiming at the retargeting of these movements to a different virtual human face. The inputs and outputs of this environment were standardized through the use of blendshapes.
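The blendshape standardization described above can be illustrated with a minimal sketch (hypothetical data, not the thesis implementation): a deformed face is the neutral mesh plus a weighted sum of per-expression displacement vectors.

```python
# Minimal blendshape interpolation sketch (hypothetical toy data):
# deformed face = neutral + sum_i w_i * (target_i - neutral).

def blend(neutral, targets, weights):
    """Return vertex positions: neutral plus weighted expression offsets."""
    assert len(targets) == len(weights)
    result = list(neutral)
    for target, w in zip(targets, weights):
        for i, (t, n) in enumerate(zip(target, neutral)):
            result[i] += w * (t - n)
    return result

# Toy 1-D "mesh" with three vertices and two expression targets.
neutral = [0.0, 0.0, 0.0]
smile   = [0.0, 1.0, 0.0]   # raises the middle vertex
blink   = [0.5, 0.0, 0.0]   # shifts the first vertex

face = blend(neutral, [smile, blink], [0.5, 1.0])  # -> [0.5, 0.5, 0.0]
```

Standardizing a tracker's output as a weight vector over a shared set of such targets is what lets different tracking algorithms be swapped behind one interface.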

Capturing and Analyzing Network Traffic from Common Mobile Devices for Security and Privacy

Overton, Billy 01 May 2014 (has links)
Mobile devices such as tablets and smartphones are becoming more common, and they are holding more information. This includes private information such as contacts, financial data, and passwords. At the same time these devices have network capability with access to the Internet being a prime feature. Little research has been done in observing the network traffic produced by these mobile devices. To determine if private information was being transmitted without user knowledge, the mobile capture lab and a set of procedures have been created to observe, capture and analyze the network traffic produced by mobile devices. The effectiveness of the lab and procedures has been evaluated with the analysis of four common mobile devices. The data analyzed from the case studies indicates that, contrary to popular opinion, very little private information is transmitted in clear text by mobile devices without the user’s knowledge.
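The analysis step described above can be sketched as a cleartext scan over captured payloads. This is a hedged illustration, not the thesis's actual procedure: real captures would come from a tool such as tcpdump or Wireshark, and the payloads and patterns below are stubs.

```python
# Illustrative sketch: scan captured packet payloads for private
# information transmitted in clear text. Payloads are stubbed bytes.

import re

PATTERNS = {
    "password": re.compile(rb"password=\S+", re.IGNORECASE),
    "email":    re.compile(rb"[\w.+-]+@[\w-]+\.[\w.]+"),
}

def scan_payloads(payloads):
    """Return (packet_index, label) pairs for cleartext matches."""
    hits = []
    for i, data in enumerate(payloads):
        for label, pattern in PATTERNS.items():
            if pattern.search(data):
                hits.append((i, label))
    return hits

captured = [
    b"GET /index.html HTTP/1.1",
    b"POST /login HTTP/1.1\r\n\r\nuser=bob&password=hunter2",
    b"\x16\x03\x01...",  # TLS handshake bytes: encrypted, nothing matches
]
hits = scan_payloads(captured)  # flags only the cleartext login packet
```

The encrypted-traffic case is the point of the finding above: payloads protected by TLS yield no cleartext matches, which is consistent with little private data being exposed.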

Fundamental insights into chemical looping combustion (CLC): a materials characterization approach to understanding mechanisms and size effects in oxygen carrier performance

Alalwan, Hayder Abdulkhaleq Khudhair 01 August 2018 (has links)
This work aims to develop fundamental insights about the underlying surface and bulk chemical processes instrumental to the efficiency of chemical looping combustion (CLC). CLC, which uses a solid-state oxygen carrier (e.g., metal oxides) to drive hydrocarbon combustion, is a promising combustion alternative that minimizes byproduct formation and facilitates capture of CO2. In this work, we compare the performance of different transition metal oxides, namely iron, copper, cobalt, manganese, and nickel oxides, as oxygen carriers in CLC using CH4 as the reducing agent. Experiments used a continuous flow reactor across temperatures ranging from 500 to 800 °C and feed flowrates from 12.5 to 250 h⁻¹. In addition to monitoring size-, temperature- and flow-rate-dependent performance trends for CH4 conversion to CO2, microscopic and spectroscopic techniques were used to investigate the solid-state mechanism of oxygen carrier reduction and the coupled surface chemical and bulk material processes influencing performance. Bulk (XRD) and surface (XPS) analyses reveal that oxygen carrier reduction can be generally represented by two models, the unreacted shrinking core model (USCM) and the nuclei growth model (NNGM). The reduction of some metal oxides can also proceed via a two-stage solid-state mechanism; for example, hematite reduction to magnetite follows the USCM, while the subsequent reductions of magnetite to wustite and wustite to iron metal follow the NNGM. Furthermore, our results reveal that minimizing the particle size promotes oxygen carrier performance, but only for metal oxides reduced according to the USCM, where metal oxide reduction initiates on the particle surface. In contrast, no benefit of decreasing particle size was observed for materials reduced according to the NNGM because the reaction initiates in the particle bulk, such that a more critical determinant of reactivity may be the available oxygen carrier volume rather than surface area.
Beyond these fundamental insights, cycling experiments were also performed to provide more practical information about the effect of oxygen carrier particle size on long-term performance in CLC applications.
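The size effect reported for USCM-type carriers can be illustrated with the textbook unreacted-shrinking-core relation for a reaction-controlled spherical particle, X = 1 − (1 − t/τ)³. This is a sketch under that assumption, not the thesis's fitted model; the characteristic time τ scales with particle radius, so smaller particles convert faster.

```python
# Unreacted shrinking core model (USCM), surface-reaction control:
# conversion X(t) = 1 - (1 - t/tau)**3 for t < tau, else 1.
# tau is the time for complete conversion and grows with particle size.

def uscm_conversion(t, tau):
    """Fraction of the oxygen carrier reduced at time t (reaction control)."""
    if t >= tau:
        return 1.0
    return 1.0 - (1.0 - t / tau) ** 3

# Halving tau (e.g. via smaller particles) raises conversion at fixed time,
# mirroring the size effect reported for USCM-type oxides.
x_large = uscm_conversion(10.0, 100.0)  # larger particle, tau = 100
x_small = uscm_conversion(10.0, 50.0)   # smaller particle, tau = 50
```

For NNGM-type materials no such radius dependence is expected, since nucleation in the bulk, not the shrinking surface front, controls the rate.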

Task-specific learning supports control over visual distraction

Cosman, Joshua Daniel 01 May 2012 (has links)
There is more information in the visual environment than we can process at a given time, and as a result selective attention mechanisms have developed that allow us to focus on information that is relevant to us while ignoring information that is not. It is often assumed that our ability to overcome distraction by irrelevant information in the environment requires conscious, effortful processing, and traditional theories of selective attention have emphasized the role of an observer's explicit intentions in driving this control. At the same time, effortful control on the basis of explicit processes may be maladaptive when the behaviors to be executed are complex and dynamic, as is the case with many behaviors that we carry out on a daily basis. One way to increase the efficiency of this process would be to store information regarding past experiences with a distracting stimulus, and use this information to control distraction upon future encounters with that particular stimulus. The focus of the current thesis was to examine such a "learned control" view of distraction, where experience with particular stimuli is the critical factor determining whether or not a salient stimulus will capture attention and distract us in a given situation. In Chapters 2 through 4, I established a role for task-specific learning in the ability of observers to overcome attentional capture, showing that experience with particular attributes of distracting stimuli and the context in which the task was performed led to a predictable decrease in capture. In Chapter 5, I examined the neural basis of these learned control effects, and the results suggest that neocortical and medial temporal lobe learning mechanisms both contribute to the experience-dependent modulation of attentional capture observed in Chapters 2-4. 
Based on these results, a model of attentional capture was proposed in which experience with particular stimulus attributes and their context critically determine the ability of salient, task-irrelevant information to capture attention and cause distraction. I conclude that although explicit processes may play some role in this process under some conditions, much of our ability to overcome distraction results directly from past experience with the visual world.

Economic regulation in the taxicab industry: a case study of Iowa City, Iowa

Saponaro, Michael Anthony 01 December 2013 (has links)
This thesis quantitatively and qualitatively analyzes the economic regulations that govern taxicab firms in Iowa City, Iowa. Based upon a review of the relevant literature, an economic analysis of regulations and market power, and interviews conducted with taxicab owners and drivers, city staff and planners, and members of the general public, this paper analyzes the costs and implications of economic regulations and the risks of regulatory capture, and identifies improvements to existing ordinances and city codes. Current economic theory argues that economic regulations create both real and perceived entry barriers and impose costs on producers and consumers. Additionally, these regulations stifle entrepreneurship and innovation, reduce driver pay, and in some instances lead to discrimination and encumbrance for the most vulnerable residents: recent immigrants and the car-less poor. On the question of whether economic regulations cause high concentrations of market power in U.S. cities, regression analysis of medium to large U.S. cities does not reveal a correlation between entry regulation and market power. Additionally, calculation of the Herfindahl (HH) Index for taxi firms in Iowa City yields an HH score of 0.103052, which is considered an un-concentrated market. However, while the quantitative data indicate that economic regulations do not exert an identifiable influence on market power, qualitative data gathered from stakeholder interviews reveal a burden in the form of unavoidable sunk costs for drivers, owners, and riders. These interviews reveal the "true" costs of regulations, as well as the costs as perceived by policy makers, regulators, and the general public, who frequently underestimate the burden of regulation.
This thesis further highlights how regulations arise in the policy-making process, and to what extent they stem from anti-competitive interests among established firms, a lack of information among policymakers, or simply planners' failure to integrate taxis into a more comprehensive regional transportation system. Ultimately this thesis argues that some of Iowa City's taxicab regulations, particularly the liability insurance minimums for drivers, the terms of operation for dispatching, and the profiling of immigrant and small-firm cabbies by ICPD, are burdensome and unnecessary. Loosening these restrictions would benefit small firms and drivers as well as the general consumer of taxi services, and would complement larger city planning goals in Iowa City, Iowa. Despite these costs and burdens, this thesis does not advocate complete deregulation of Iowa City's transportation policy, particularly when case studies of such efforts have not always yielded positive results. Instead, this thesis advocates "better regulation", enforced at a regional rather than a municipal level.
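The Herfindahl (HH) Index cited above is the sum of squared market shares; on the 0-to-1 scale, values below roughly 0.15 are conventionally read as un-concentrated. A sketch with illustrative shares (not the Iowa City data):

```python
# Herfindahl (HH) index: sum of squared market shares.
# Shares are normalized so the index lies between 1/n and 1.

def herfindahl(shares):
    """HH index for a list of firm market shares (any positive scale)."""
    total = sum(shares)
    return sum((s / total) ** 2 for s in shares)

# Ten equal firms -> HH = 0.1, close to the 0.103 reported for Iowa City.
hh_equal = herfindahl([1.0] * 10)
# One dominant firm -> much higher concentration.
hh_dominant = herfindahl([0.8, 0.1, 0.1])
```

The 0.103052 figure in the abstract is thus roughly what ten similarly sized firms would produce, which is why the market reads as un-concentrated despite the regulatory entry barriers.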

Here to stay: the role of value creation, capture and exchange in limiting the liability of newness for new entrant museums.

Burton, Christine. January 2006 (has links)
This thesis examines the concepts of value creation, capture and exchange in limiting the liability of newness for nonprofit museums entering the sector. There has been considerable examination of cultural value in relation to museums. However, little is known about how value is created, captured and exchanged for stakeholders in new museums. It is posited that value creation, capture and exchange constitute a value cycle. Through this value cycle, management in new museums detects and limits the liability of newness. The ability to detect and limit the liability of newness enables the continuation of the museum. If the liability of newness is not limited, it may mean that a new museum exits the sector or is transformed. The concept of a value cycle is derived from an examination of the nonprofit management literature, aspects of the for-profit management literature and the arts and museum management literature. Value creation is a key concept in the three literature areas. Value creation, in this context, is specifically defined as the worth of the physical manifestation of the museum. It resides in the building and the collection, services and programs within the building. It is suggested that this value needs to be transformed and consumed by a range of stakeholders. The transformation of value creation is denoted as value capture. Value capture is the appeal of programs, projects and activities. Value capture includes how well the products and services align with particular stakeholders, how accountable the managers are to stakeholders and how products and services are consumed by stakeholders. The measure of how managers have been able to capture value is in the realm of value exchange. Value exchange is the merit of programs, projects and activities.
Value exchange is in the form of revenue raised through sponsorship; continuation of revenue investment by the principal stakeholder, the state; time and money transacted by visitors; and intangible exchange such as leadership and reputation enhancement through collaborations. A Value Cycle Framework of New Entrant Museums is then developed as a working analytical tool to assess how the value cycle operates and how the liability of newness is detected and limited by museum management. The Value Cycle Framework is used to assess four cases. These case studies include the National Museum of Australia as a purpose-built new entrant; the Australian National Maritime Museum as a purpose-built new entrant; the Mint as a recycled new entrant; and the Earth Exchange as a refurbished new entrant. Each case is assessed discretely using secondary and primary source material and analysing qualitative data generated from interviews with key stakeholders. The cases are then compared in order to track similarities and differences in relation to value creation, capture and exchange. The research findings suggest that a value cycle is operating in relation to new entrant museums. This value cycle is dynamic and non-sequential. Until value creation is floated for a range of stakeholders it is difficult for managers to know the worth of their content, location or their building. Value creation is a nominal starting point, signifying the arrival of a new entrant in the museum marketplace. However, value capture is the zone that is the most vulnerable and volatile for managers of new museums. Typically in these case studies value capture includes a disruptive episode, such as a review process, that indicates the liability of newness. Managers within the museum who can respond and resolve contradictions between museological beliefs and the demands of stakeholders (and in so doing limit the liability of newness) are likely to continue museum operations.
Senior executives who find such reconciliation more difficult, jeopardize the future operations of the museum to such an extent that the museums close or are transformed within the museum sector. Through these four case studies a revised Value Cycle Framework is developed as an analytical device. This analytical framework can assist in understanding the processes involved in new entry for museums.

The Motion Capture Pipeline

Holmboe, Dennis January 2008 (has links)
Motion capture is an essential part of a world full of digital effects in movies and games. Understanding the pipelines between software packages is a crucial component of this research. The methods used to create motion capture structures today are reviewed, along with how they are implemented in order to create the movements that we see in modern games and movies.

The use of ⁶⁰Co cell survival curves in BNCT research

Johnson, Jennifer Elizabeth 08 June 1994 (has links)
The cell survival curve is the only means by which to both qualitatively and quantitatively assess morphologic alterations directly resulting from in vitro irradiation of the cell. A ⁶⁰Co cell survival curve experiment has successfully demonstrated the response of the AtT-20 clone mammalian cell line to the effects of gamma rays. With the results of this experiment, a low-LET radiation cell survival curve now exists to be used as a point of comparison upon the completion of BNCT cell survival curves. / Graduation date: 1995
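Low-LET survival curves such as the cobalt-60 curve described above are commonly fit with the linear-quadratic model S = exp(−(αD + βD²)). A sketch with illustrative α and β values (not fitted AtT-20 parameters):

```python
# Linear-quadratic cell survival model: surviving fraction after a single
# acute dose D is S = exp(-(alpha*D + beta*D**2)). The alpha/beta values
# below are illustrative placeholders, not experimental fits.

import math

def surviving_fraction(dose_gy, alpha=0.2, beta=0.02):
    """Surviving fraction after a single acute dose (linear-quadratic model)."""
    return math.exp(-(alpha * dose_gy + beta * dose_gy ** 2))

# The quadratic term makes survival fall off faster than exponentially,
# giving low-LET curves their characteristic shoulder.
s2 = surviving_fraction(2.0)   # fraction surviving 2 Gy
s8 = surviving_fraction(8.0)   # much smaller fraction at 8 Gy
```

Comparing such a low-LET curve against BNCT survival data is exactly the role the abstract assigns to the ⁶⁰Co experiment: it supplies the reference curve for the high-LET comparison.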

Dual Bayesian and morphology-based approach for markerless human motion capture in natural interaction environments

Correa Hernandez, Pedro 30 June 2006 (has links)
This work presents a novel technique for 2D human motion capture using a single uncalibrated camera. The user's five extremities (head, hands and feet) are extracted, labelled and tracked after silhouette segmentation. As these are the minimum number of points needed to enable whole-body gestural interaction, we will henceforth refer to these features as crucial points. The crucial point candidates are defined as the local maxima of the geodesic distance with respect to the center of gravity of the actor region which lie on the silhouette boundary. In order to disambiguate the selected crucial points into head, left and right foot, and left and right hand classes, we propose a Bayesian method that combines a prior human model with the intensities of the tracked crucial points. Due to its low computational complexity, the system can run in real time on standard personal computers, with an average error rate between 2% and 7% in realistic situations, depending on the context and segmentation quality.
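The crucial-point selection step described above can be sketched in two stages: geodesic distance from the silhouette's centre of gravity (a 4-connected BFS inside the mask), then cells whose distance is a local maximum. The tiny cross-shaped mask below is illustrative, not real segmentation output.

```python
# Sketch of crucial-point candidate selection: BFS geodesic distance from
# the centroid of a binary silhouette, then local maxima of that distance
# (in practice these lie on the silhouette boundary, at the extremities).

from collections import deque

def geodesic_distances(mask, start):
    """BFS distance from `start` through True cells of a 2-D boolean grid."""
    h, w = len(mask), len(mask[0])
    dist = {start: 0}
    queue = deque([start])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and (ny, nx) not in dist:
                dist[(ny, nx)] = dist[(y, x)] + 1
                queue.append((ny, nx))
    return dist

def crucial_points(mask, dist):
    """Cells whose geodesic distance is a local maximum over mask neighbours."""
    points = []
    for (y, x), d in dist.items():
        neighbours = [dist.get((ny, nx), -1)
                      for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))]
        if d > 0 and all(d >= n for n in neighbours):
            points.append((y, x))
    return sorted(points)

# A cross-shaped "silhouette": centroid in the middle, four extremities.
rows = ["..#..",
        "#####",
        "..#..",
        "..#..",
        "..#.."]
mask = [[c == "#" for c in row] for row in rows]
dist = geodesic_distances(mask, (1, 2))          # (1, 2) = centre of gravity
points = crucial_points(mask, dist)              # top, left, right, bottom tips
```

On the toy mask the four recovered tips play the role of head, hands and feet; the Bayesian labelling stage would then assign those classes.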

Using Analogy to Acquire Commonsense Knowledge from Human Contributors

Chklovski, Timothy 12 February 2003 (has links)
The goal of the work reported here is to capture the commonsense knowledge of non-expert human contributors. Achieving this goal will enable more intelligent human-computer interfaces and pave the way for computers to reason about our world. In the domain of natural language processing, it will provide the world knowledge much needed for semantic processing of natural language. To acquire knowledge from contributors not trained in knowledge engineering, I take the following four steps: (i) develop a knowledge representation (KR) model for simple assertions in natural language, (ii) introduce cumulative analogy, a class of nearest-neighbor based analogical reasoning algorithms over this representation, (iii) argue that cumulative analogy is well suited for knowledge acquisition (KA) based on a theoretical analysis of the effectiveness of KA with this approach, and (iv) test the KR model and the effectiveness of the cumulative analogy algorithms empirically. To investigate the effectiveness of cumulative analogy for KA empirically, Learner, an open-source system for KA by cumulative analogy, has been implemented, deployed, and evaluated. (The site "1001 Questions" is available at http://teach-computers.org/learner.html). Learner acquires assertion-level knowledge by constructing shallow semantic analogies between a KA topic and its nearest neighbors and posing these analogies as natural language questions to human contributors. Suppose, for example, that based on the knowledge about "newspapers" already present in the knowledge base, Learner judges "newspaper" to be similar to "book" and "magazine." Further suppose that the assertions "books contain information" and "magazines contain information" are also already in the knowledge base. Then Learner will use cumulative analogy from the similar topics to ask humans whether "newspapers contain information."
Because similarity between topics is computed based on what is already known about them, Learner exhibits bootstrapping behavior: the quality of its questions improves as it gathers more knowledge. By summing evidence for and against posing any given question, Learner also exhibits noise tolerance, limiting the effect of incorrect similarities. The KA power of shallow semantic analogy from nearest neighbors is one of the main findings of this thesis. I perform an analysis of commonsense knowledge collected by another research effort that did not rely on analogical reasoning and demonstrate that there is indeed a sufficient amount of correlation in the knowledge base to motivate using cumulative analogy from nearest neighbors as a KA method. Empirically, the percentages of questions answered affirmatively, answered negatively, and judged to be nonsensical in the cumulative analogy case compare favorably with the baseline, no-similarity case that relies on random objects rather than nearest neighbors. Of the questions generated by cumulative analogy, contributors answered 45% affirmatively, 28% negatively, and marked 13% as nonsensical; in the control, no-similarity case, 8% of questions were answered affirmatively, 60% negatively, and 26% were marked as nonsensical.
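The newspaper/book/magazine example above can be sketched with a toy knowledge base (illustrative, not Learner's actual data structures): topics are compared by shared assertions, and properties held by the nearest neighbours but unknown for the target become candidate questions, with votes summed as cumulative evidence.

```python
# Toy sketch of cumulative analogy for knowledge acquisition:
# similarity = count of shared (property -> truth value) assertions;
# candidate questions = neighbours' true properties unknown for the topic.

def similarity(kb, a, b):
    """Count of assertions shared by topics a and b (same property, same value)."""
    return sum(1 for prop, val in kb.get(a, {}).items()
               if kb.get(b, {}).get(prop) == val)

def candidate_questions(kb, topic, k=2):
    """Properties to ask about, voted for by the k most similar topics."""
    neighbours = sorted((t for t in kb if t != topic),
                        key=lambda t: similarity(kb, topic, t),
                        reverse=True)[:k]
    votes = {}
    for n in neighbours:
        for prop, val in kb[n].items():
            if val and prop not in kb.get(topic, {}):
                votes[prop] = votes.get(prop, 0) + 1  # cumulative evidence
    return sorted(votes, key=votes.get, reverse=True)

kb = {
    "book":      {"contains information": True, "is printed": True},
    "magazine":  {"contains information": True, "is printed": True},
    "hammer":    {"is a tool": True},
    "newspaper": {"is printed": True},
}
# By analogy to books and magazines, the system would ask whether
# "newspapers contain information".
questions = candidate_questions(kb, "newspaper")
```

The vote-summing step is also where the noise tolerance described above comes from: a single spurious neighbour contributes only one vote, so its properties rank below those supported by several neighbours.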
