  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
141

Very low bit rate video coding using geometric transform motion compensation

De Faria, Sergio Manuel Maciel January 1996 (has links)
No description available.
142

Adaptive sensor array processing in non-stationary signal environments

Hayward, Stephen David January 1999 (has links)
No description available.
143

Kill slurry design for perforated completions

Han, Liqun January 1993 (has links)
No description available.
144

Nonlinear estimation techniques for target tracking

McGinnity, Shaun Joseph January 1998 (has links)
No description available.
145

Management of Uncertainties in Publish/Subscribe System

Liu, Haifeng 18 February 2010 (has links)
In the publish/subscribe paradigm, information providers disseminate publications to all consumers who have expressed interest by registering subscriptions. This paradigm has found widespread application, ranging from selective information dissemination to network management. However, existing publish/subscribe systems cannot capture the uncertainty inherent in the information carried by subscriptions or publications. In many situations, data sources exhibit various kinds of uncertainty. Examples of imprecision include: the exact knowledge needed to specify subscriptions or publications is not available; the match between a subscription and a publication with uncertain data is approximate; and the constraints that define a match are not only content-based but also take semantic information into consideration. These kinds of uncertainty have received little attention in the context of publish/subscribe systems. In this thesis, we propose new publish/subscribe models that express uncertainty and semantics in publications and subscriptions, along with matching semantics for each model. We also develop efficient filtering algorithms for our models so that they can be applied to the rapidly increasing volume of information on the Internet. A thorough experimental evaluation demonstrates that the proposed systems scale to large numbers of subscribers and high publishing rates.
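The approximate matching the abstract describes can be illustrated with a small sketch (hypothetical names and data model; the thesis's actual models are richer): when a publication's attribute values are uncertain, a broker can compute the probability that a subscription's predicates hold and deliver only when that probability clears a threshold.

```python
def match_probability(subscription, publication):
    """Probability that an uncertain publication satisfies a subscription.

    subscription: dict attr -> predicate (a function over concrete values)
    publication:  dict attr -> list of (value, probability) pairs
    Attributes are assumed independent, so per-attribute probabilities multiply.
    """
    prob = 1.0
    for attr, predicate in subscription.items():
        dist = publication.get(attr)
        if dist is None:
            return 0.0  # attribute absent: the predicate cannot be satisfied
        prob *= sum(p for value, p in dist if predicate(value))
    return prob

# A subscriber interested in readings hotter than 30 degrees.
sub = {"temp": lambda v: v > 30}
# A sensor publication whose reading is only known as a distribution.
pub = {"temp": [(28, 0.2), (32, 0.5), (35, 0.3)]}
print(match_probability(sub, pub))  # → 0.8
```

Under this semantics a broker would deliver the publication if, say, the subscriber registered a match threshold of 0.5.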
146

Personal Email Spam Filtering with Minimal User Interaction

Mojdeh, Mona January 2012 (has links)
This thesis investigates ways to reduce or eliminate the need for user input to learning-based personal email spam filters. Previous studies have shown that personal spam filters yield superior effectiveness, at the cost of extensive user training that may be burdensome or impossible. This work describes new approaches to building a personal spam filter that requires minimal user feedback. An initial study investigates how well a personal filter can learn from sources of data other than the user's own messages. Our initial studies show that inter-user training yields substantially inferior results to intra-user training using the best known methods. Moreover, contrary to previous literature, we find that transfer learning degrades the performance of spam filters when the training and test sets belong to different users or different times. We also adapt and modify a graph-based semi-supervised learning algorithm to build a filter that can classify an entire inbox using twenty or fewer user judgments. Our experiments show that this approach compares well with previous techniques when trained on as few as two examples. We also present a toolkit we developed for privacy-preserving user studies of spam filters. The toolkit allows researchers to evaluate any spam filter that conforms to a standard interface defined by TREC on real users' email boxes; researchers have access only to the TREC-style result file, not to any content of a user's email stream. To eliminate the need for user feedback entirely, we build a personal autonomous filter that learns exclusively from the output of a global spam filter. Our laboratory experiments show that such filters, with no user input, can substantially improve the results of open-source and industry-leading commercial filters that employ no user-specific training. We use our toolkit to validate the performance of the autonomous filter in a user study.
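As a rough illustration of the graph-based semi-supervised idea (a generic label-propagation sketch under assumed similarity scores, not the thesis's actual algorithm), a handful of judged messages can propagate spam/ham scores to the rest of an inbox over a message-similarity graph:

```python
def propagate_labels(similarity, seeds, iterations=50):
    """similarity: symmetric n x n matrix of message similarities;
    seeds: {message index: +1 for spam, -1 for ham}.
    Unlabeled messages iteratively take the similarity-weighted average of
    their neighbours' scores; labeled messages stay clamped to the judgment."""
    n = len(similarity)
    scores = [float(seeds.get(i, 0.0)) for i in range(n)]
    for _ in range(iterations):
        new_scores = []
        for i in range(n):
            if i in seeds:
                new_scores.append(float(seeds[i]))
                continue
            total = sum(similarity[i][j] for j in range(n) if j != i)
            mix = sum(similarity[i][j] * scores[j] for j in range(n) if j != i)
            new_scores.append(mix / total if total else 0.0)
        scores = new_scores
    return scores  # the sign of each score is the predicted class

# Four messages; only message 0 (spam) and message 3 (ham) are judged.
sim = [[0.0, 0.9, 0.1, 0.0],
       [0.9, 0.0, 0.1, 0.1],
       [0.1, 0.1, 0.0, 0.9],
       [0.0, 0.1, 0.9, 0.0]]
scores = propagate_labels(sim, {0: +1, 3: -1})
```

With two judgments the two unlabeled messages inherit the class of their nearest labeled neighbour, which is the spirit of classifying a whole inbox from very few user judgments.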
147

Belief Revision for Adaptive Information Agents

Lau, Raymond Yiu Keung January 2003 (has links)
As the richness and diversity of information available to us in our everyday lives has expanded, so the need to manage this information has grown. The lack of effective information management tools has given rise to what is colloquially known as the information overload problem. Intelligent agent technologies have been explored to develop personalised tools for autonomous information retrieval (IR). However, these so-called adaptive information agents are still primitive in terms of their learning autonomy, inference power, and explanatory capabilities. For instance, users often need to provide large amounts of direct relevance feedback to train the agents before they can acquire the users' specific information requirements. Existing information agents are also weak in dealing with serendipity in IR because they cannot infer document relevance with respect to possibly related IR contexts. This thesis exploits theories and technologies from the fields of Information Retrieval (IR), Symbolic Artificial Intelligence and Intelligent Agents to develop the next generation of adaptive information agents and alleviate the problem of information overload. In particular, fundamental issues such as representation, learning, and classification (e.g., classifying documents as relevant or not) pertaining to these agents are examined. The design of the adaptive information agent model stems from a basic intuition in IR. By way of illustration, given a retrieval context involving a science student and the query "Java", what information items should an intelligent information agent recommend to its user? The agent should recommend documents about "Computer Programming" if it believes that its user is a computer science student and that every computer science student needs to learn programming.
However, if the agent later discovers that its user is studying "volcanology", and the agent also believes that volcanologists are interested in the volcanoes of Java, the agent may recommend documents about "Merapi" (a volcano in Java with a recent eruption in 1994). This scenario illustrates that a retrieval context comprises not only a set of terms and their frequencies but also the relationships among terms (e.g., java ∧ science → computer, computer → programming, java ∧ science ∧ volcanology → merapi, etc.). In addition, retrieval contexts represented in information agents should be revised in accordance with the changing information requirements of the users. Therefore, to enhance the adaptive and proactive IR behaviour of information agents, an expressive representation language is needed to represent complex retrieval contexts, and an effective learning mechanism is required to revise the agents' beliefs about the changing retrieval contexts. Moreover, a sound reasoning mechanism is essential for information agents to infer document relevance with respect to retrieval contexts, enhancing their proactiveness and learning autonomy. The theory of belief revision advocated by Alchourrón, Gärdenfors, and Makinson (AGM) provides a rigorous formal foundation for modelling evolving retrieval contexts in terms of changing epistemic states in adaptive information agents. The expressive power of the AGM framework allows sufficient details of retrieval contexts to be captured. Moreover, the AGM framework enforces the principles of minimal and consistent belief change. These principles coincide with the requirements of modelling changing information retrieval contexts. AGM belief revision logic has a close connection with the Logical Uncertainty Principle, which describes the fundamental approach of logic-based IR models. Accordingly, the AGM belief functions are applied to develop the learning components of adaptive information agents.
Expectation inference, which is characterised by axioms leading to conservatively monotonic IR behaviour, plays a significant role in developing the agents' classification components. Because of the direct connection between the AGM belief functions and the expectation inference relations, seamless integration of the information agents' learning and classification components is made possible. Essentially, the learning functions and the classification functions of adaptive information agents are conceptualised by the belief revision operation K*q and the expectation inference relation q |~ d, respectively. This conceptualisation can be interpreted as: (1) learning is the process of revising the representation K of a retrieval context with respect to a user's relevance feedback q, which can be seen as a refined query; (2) classification is the process of determining the degree of relevance of a document d with respect to the refined query q given the agent's expectation (i.e., beliefs) K about the retrieval context. At the computational level, how to induce the epistemic entrenchment ordering that defines the AGM belief functions, and how to implement the AGM belief functions with an effective and efficient computational algorithm, are among the core research issues addressed. Automated methods for discovering context-sensitive term associations such as (computer → programming) and preclusion relations such as (volcanology ↛ programming) are explored. In addition, an effective classification method underpinned by expectation inference is developed for adaptive information agents. Last but not least, quantitative evaluations based on well-known IR benchmarking processes are applied to examine the performance of the prototype agent system. The performance of the belief revision based information agent system is compared with that of a vector space based agent system and other adaptive information filtering systems that participated in TREC-7. Overall, encouraging results are obtained from our initial experiments.
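A toy sketch of entrenchment-guided revision (illustrative only: the AGM operations in the thesis are defined over full logical theories, not the flat literal store and made-up beliefs used here). A new belief displaces a contradicting belief only when the latter is less entrenched, which keeps change minimal and consistent:

```python
def revise(beliefs, new_belief, rank):
    """AGM-style revision over a flat set of literals.
    beliefs: dict literal -> entrenchment rank (higher = harder to give up).
    'not p' contradicts 'p'. Minimal change: a contradicting belief is given
    up only when it is less entrenched than the incoming one."""
    def negate(lit):
        return lit[4:] if lit.startswith("not ") else "not " + lit

    revised = dict(beliefs)
    rival = negate(new_belief)
    if rival in revised:
        if revised[rival] > rank:
            return revised          # the entrenched belief wins; input rejected
        del revised[rival]          # contraction: give up the weaker belief
    revised[new_belief] = rank      # expansion: add the new belief
    return revised

K = {"user studies computer-science": 1, "computer-science -> programming": 2}
# The agent discovers its user is actually a volcanology student.
K2 = revise(K, "not user studies computer-science", rank=3)
```

In the agent setting, relevance feedback plays the role of the incoming belief and the retrieval context plays the role of K, mirroring the K*q reading above.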
148

Single channel speech enhancement based on perceptual temporal masking model

Wang, Yao, Electrical Engineering & Telecommunications, Faculty of Engineering, UNSW January 2007 (has links)
In most speech communication systems, the presence of background noise degrades the quality and intelligibility of speech, especially when the Signal-to-Noise Ratio (SNR) is low. Numerous speech enhancement techniques have been employed successfully in many applications. However, at low signal-to-noise ratios most of these techniques tend to introduce a perceptually annoying residual noise known as "musical noise". The research presented in this thesis aims to minimize this musical noise and maximize the noise reduction ability of speech enhancement algorithms to improve speech quality in low-SNR environments. This thesis proposes two novel speech enhancement algorithms, based on Wiener and Kalman filters, that exploit the masking properties of the human auditory system to reduce background noise. The perceptual Wiener filter method uses either temporal or simultaneous masking to adjust the Wiener gain in order to suppress noise below the masking thresholds. The second algorithm involves reshaping the corrupted signal according to the masking threshold in each critical band, followed by Kalman filtering. A comparison of the results from these proposed techniques with those obtained from traditional methods suggests that the proposed algorithms address the problem of noise reduction effectively while decreasing the level of musical noise. This thesis also discusses many other competitive noise suppression methods and evaluates their performance under different types of noise environments. The performances were evaluated and compared using both objective PESQ measures (ITU-T P.862) and subjective listening tests (ITU-T P.835). The proposed speech enhancement schemes based on the auditory masking model outperformed the other methods tested.
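One way to read the perceptual Wiener idea is the following simplified per-bin sketch (toy spectra and a made-up masking threshold; the thesis derives its thresholds from a full auditory masking model): wherever the residual noise left by the plain Wiener gain would already fall below the masking threshold, the gain can be relaxed toward unity, reducing speech distortion without making noise audible.

```python
import numpy as np

def perceptual_wiener_gain(speech_psd, noise_psd, mask):
    """Per-frequency-bin gain: plain Wiener gain where residual noise is
    audible, otherwise only as much suppression as the mask requires."""
    wiener = speech_psd / (speech_psd + noise_psd)   # classic Wiener gain
    residual = wiener ** 2 * noise_psd               # noise power after filtering
    # Gain at which residual noise would just reach the masking threshold.
    mask_gain = np.sqrt(mask / np.maximum(noise_psd, 1e-12))
    audible = residual > mask
    return np.where(audible, wiener, np.minimum(mask_gain, 1.0))

speech = np.array([1.0, 1.0])
noise = np.array([1.0, 1.0])
mask = np.array([10.0, 0.01])   # first bin: noise fully masked
g = perceptual_wiener_gain(speech, noise, mask)  # → [1.0, 0.5]
```

In the masked bin no suppression is applied at all, which is exactly where aggressive Wiener filtering would otherwise introduce musical-noise artifacts for no perceptual gain.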
149

Robust object tracking using the particle filtering and level set methods

Luo, Cheng, Computer Science & Engineering, Faculty of Engineering, UNSW January 2009 (has links)
Robust object tracking plays a central role in many applications of image processing, computer vision and automatic control. In this thesis, robust object tracking in complex environments, including heavy clutter in the background, low-resolution image sequences and non-stationary cameras, has been studied. The interest of this study stems from improving the performance of visual tracking using particle filtering. A Geometric Active contour-based Tracking Estimator, namely GATE, has been developed to tackle robust object tracking when multiple features or good object detection cannot be guaranteed. GATE combines particle filtering with the level set-based active contour method. The particle filtering method can deal with nonlinear and non-Gaussian recursive estimation problems, and the level set-based active contour method can classify the state space of the particle filter under the methodology of one-class classification. By integrating this classifier into the particle filter, geometric information introduced by the shape prior and pose invariance of the tracked object in the level set-based active contour method can be utilised to prevent particles corresponding to outlier measurements from being heavily reweighted. Hence, this procedure reshapes and refines the posterior distribution of the particle filter. To verify the performance of GATE, it is compared with the standard particle filter. Since video sequences in different applications are usually captured by diverse devices, GATE and standard particle filters with identical initialisation are studied on image sequences captured by handheld, stationary and PTZ cameras, respectively. According to the experimental results, even though a simple color observation model based on the Hue-Saturation-Value (HSV) color histogram is adopted, the newly developed GATE significantly improves the performance of particle filtering for object tracking in complex environments. Meanwhile, GATE suggests a novel approach to tackling the impoverishment problem in recursive Bayesian estimation using sampling methods.
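The reweighting step GATE performs can be caricatured in a few lines (hypothetical helper names and a simple geometric test; the real method evolves a level-set contour rather than the fixed region used here): particles whose predicted states fall outside the object contour are penalised before resampling, which reshapes the posterior away from outlier measurements.

```python
def reweight(particles, weights, inside_contour, penalty=0.1):
    """Down-weight particles rejected by the one-class contour test,
    then renormalise so the weights again sum to one."""
    adjusted = [w if inside_contour(p) else w * penalty
                for p, w in zip(particles, weights)]
    total = sum(adjusted)
    return [w / total for w in adjusted]

# 1-D toy: the tracked object's contour covers |x| < 1.
particles = [0.2, -0.5, 3.0]          # the last particle is an outlier
weights = [1 / 3, 1 / 3, 1 / 3]
new_weights = reweight(particles, weights, lambda x: abs(x) < 1.0)
```

A soft penalty (rather than zeroing the weight) keeps the filter from discarding particles outright when the contour itself is uncertain.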
150

Advanced navigation algorithms for precision landing

Zanetti, Renato, January 1900 (has links)
Thesis (Ph. D.)--University of Texas at Austin, 2007. / Vita. Includes bibliographical references.
