661

A fuzzy logic approach for chatter detection and suppression in end milling

Xu, Diancheng January 2003 (has links)
In metal cutting processes, excessive vibration, or chatter, has an adverse effect on productivity and product surface quality. Although many studies have been reported in the literature over the past few decades, their practical application has been very limited. In this study, a new system has been developed for chatter detection and suppression. The coherence function values of the frequency spectra from two accelerometers mounted in orthogonal directions were used as a chatter indicator, and the vibration energy was used to offset the over-vigilant behaviour of the coherence function. A fuzzy logic control approach was used for chatter suppression based on both the coherence function value and the vibration energy level. To improve the adaptability of the fuzzy controller, a self-learning algorithm was developed to update the fuzzy rule base on-line, and a direct output tuning method was proposed to improve the responsiveness of the system. The proposed system has been tested on both steel and aluminium workpieces, with and without thin walls. The experimental results show that the system worked reasonably well for on-line chatter detection and suppression. The thesis also explores the possibility of using the coherence function for chatter prediction; verification of its feasibility is left for future work.
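The two-signal indicator described above can be sketched in a few lines. This is a minimal illustration, not the thesis's actual controller: the thresholds, sampling rate, and the use of peak spectral coherence plus a mean-square-energy gate are all assumptions made for the example.

```python
import numpy as np
from scipy.signal import coherence

def detect_chatter(x, y, fs, coh_threshold=0.8, energy_threshold=0.5):
    """Flag chatter when the two orthogonal accelerometer signals are
    strongly coherent at some frequency AND the vibration energy is high.
    The energy gate offsets the coherence function's over-vigilance:
    coherent but low-amplitude vibration is not treated as chatter."""
    _, Cxy = coherence(x, y, fs=fs, nperseg=256)
    peak_coh = float(np.max(Cxy[1:]))                # ignore the DC bin
    energy = float(np.mean(x**2) + np.mean(y**2))    # mean-square vibration energy
    return peak_coh > coh_threshold and energy > energy_threshold
```

During chatter both directions vibrate at the same chatter frequency, so the coherence near that frequency approaches one; during stable cutting the two signals are dominated by largely independent broadband excitation.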
662

Personalized and artificial intelligence Web caching and prefetching

Acharjee, Utpal January 2006 (has links)
Web caching and prefetching are the most popular and widely used solutions to remedy Internet performance problems, and performance improves more when the two techniques are combined than when either is used individually. Web caching reduces bandwidth consumption and network latency by serving the user's request from its own cache instead of the original Internet source. Prefetching preloads and caches web objects that the user has not yet requested but is expected to request in the near future; it provides low retrieval latency for users as well as high hit ratios. Existing methods for caching and prefetching are mostly traditional sharable Proxy cache servers. In our personalized caching and prefetching approach, the system builds a user profile of the user's web behaviour by parsing keywords from the HTML pages the user browses. A keyword is added to the profile, or its associated weight is incremented if it is already in the profile, so that the profile reflects the user's web behaviour and interests. The cache and prefetch prediction module considers both static and dynamic web behaviour. We have designed and implemented an artificial-intelligence, multilayer neural network-based caching and prediction algorithm and used it to personalize the Proteus Proxy server. The enhanced Proteus is a multilingual, internationally supported Proxy system that works in both mobile and traditional sharable Proxy-server environments. In the prefetch option of Proteus, we also implemented a unique content-filtering feature that blocks the downloading of unwanted web objects.
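The profile-update rule described above (add a keyword with weight one, or increment its weight if already present) can be sketched as follows. The class name, stopword list, and crude tag-stripping regex are illustrative assumptions, not part of the Proteus implementation:

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "it"}

class UserProfile:
    """Hypothetical sketch of a keyword-weight user profile: each browsed
    HTML page contributes its keywords, and repeated keywords accumulate
    weight, so the heaviest keywords reflect the user's interests."""

    def __init__(self):
        self.weights = Counter()

    def update(self, html):
        text = re.sub(r"<[^>]+>", " ", html)       # strip tags crudely
        for word in re.findall(r"[a-z]+", text.lower()):
            if word not in STOPWORDS:
                self.weights[word] += 1            # add with weight 1, or increment

    def top(self, n=5):
        """Highest-weighted keywords; candidates for prefetch prediction."""
        return [w for w, _ in self.weights.most_common(n)]
```

A prefetcher could then score candidate links against `top()` to decide which objects to preload.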
663

Identification of attribute interactions and generation of globally relevant continuous features in machine learning

Letourneau, Sylvain January 2003 (has links)
Datasets found in real-world applications of machine learning are often characterized by low-level attributes with important interactions among them. Such interactions may increase the complexity of the learning task by limiting the usefulness of the attributes to dispersed regions of the representation space. In such cases, we say that the attributes are locally relevant. To obtain adequate performance with locally relevant attributes, the learning algorithm must be able to analyse the interacting attributes simultaneously and fit an appropriate model for the type of interactions observed. This is a complex task that surpasses the ability of most existing machine learning systems. This research proposes a solution to this problem by extending the initial representation with new globally relevant features. The new features make explicit the important information that was previously hidden by the initial interactions, thus reducing the complexity of the learning task. This dissertation first presents an idealized study of the potential benefits of globally relevant features, assuming perfect knowledge of the interactions among the initial attributes; this study involves synthetic data and a variety of machine learning systems. Recognizing that not all interactions produce a negative effect on performance, the dissertation introduces a novel technique named relevance graphs to identify the interactions that negatively affect the performance of existing learning systems. Interactive relevance graphs address another important need by letting the user participate in the construction of a new representation that cancels the effects of the negative attribute interactions. The dissertation then extends the concept of relevance graphs by introducing a series of algorithms for the automatic discovery of appropriate transformations. We use the name GLOREF (GLObally RElevant Features) to designate the approach that integrates these algorithms.
The dissertation fully describes the GLOREF approach along with an extensive empirical evaluation on both synthetic and UCI datasets. This evaluation shows that the features produced by the GLOREF approach significantly improve accuracy on both synthetic and real-world data.
664

Towards obstacle reconstruction through wide baseline set of images

Elias, Rimon January 2004 (has links)
In this thesis, we handle the problem of extracting 3D information from multiple images of a robotic work site in the context of teleoperation. A human operator determines the virtual path of a robotic vehicle, and our mission is to provide him with the sequence of images that would be seen by the teleoperated robot moving along this path. The environment in which the robotic vehicle moves has a planar ground surface, and a set of wide baseline images is available for the work site; this implies that only a small number of points may be visible in more than two views. Moreover, the camera parameters are known only approximately: according to the sensor error margins, the parameter readings lie within some range. Obstacles of different shapes are present in such an environment. In order to generate the image sequence, the ground plane as well as the obstacles must be represented. The perspective image of the ground plane can be obtained through a homography matrix, computed from the virtual camera parameters and the overhead view of the work site. To represent obstacles, we suggest both volumetric and planar methods. Our algorithm for representing obstacles starts with detecting junctions, using a new fast junction detection operator we propose; this operator provides the location of each junction as well as the orientations of the edges surrounding it. Junctions belonging to the obstacles are distinguished from those belonging to the ground plane by calculating the inter-image homography matrices. Fundamental matrices relating images can be estimated roughly from the available camera parameters, and strips surrounding epipolar lines are used as a search range for detecting possible matches. We introduce a novel homographic correlation method, applied among candidates by reconstructing the planes of junctions in space. Two versions of homographic correlation are proposed, based on SAD and VNC.
Both versions achieve matching results that outperform non-homographic correlation. The match set is then turned into a set of 3D points through triangulation. At this point, we propose a hierarchical structure to cluster points in space, which results in bounding boxes containing obstacles. A more accurate volumetric representation of each obstacle can be achieved through a voxelization approach. An alternative representation is also suggested: obstacles as planar patches, obtained through mapping between original and synthesized images. Finally, the steps of the different algorithms presented throughout the thesis are supported by examples that demonstrate the usefulness of our approaches.
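The ground-plane test used to classify junctions can be sketched concretely: a point pair that obeys the inter-image homography induced by the ground plane lies on the ground, while a large transfer error flags an obstacle junction. The matrix values and tolerance below are invented for illustration:

```python
import numpy as np

def apply_homography(H, pts):
    """Map Nx2 image points through a 3x3 homography (homogeneous transfer)."""
    pts = np.asarray(pts, dtype=float)
    ones = np.ones((pts.shape[0], 1))
    ph = np.hstack([pts, ones]) @ H.T
    return ph[:, :2] / ph[:, 2:3]          # de-homogenize

def is_ground_point(p1, p2, H, tol=2.0):
    """True when transferring p1 by H lands within tol pixels of p2, i.e.
    the pair is consistent with the ground-plane homography."""
    err = np.linalg.norm(apply_homography(H, [p1])[0] - np.asarray(p2, dtype=float))
    return float(err) < tol

# Illustrative inter-image homography (values are arbitrary for the example).
H = np.array([[1.0, 0.1,  5.0],
              [0.0, 1.2, -3.0],
              [0.0, 0.0,  1.0]])
```

In the thesis's setting the same machinery serves double duty: the ground-plane homography renders the virtual view of the ground, and its violation segments obstacle junctions for the later matching and triangulation stages.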
665

Vision-based localization, map building and obstacle reconstruction in ground plane environments

Hajjdiab, Hassan January 2004 (has links)
The work described in this thesis develops the theory of 3D obstacle reconstruction and map building in the context of a robot, or a team of robots, each equipped with one on-board camera. The study comprises several problems corresponding to the different phases of the robot's actions. The thesis first studies the problem of image matching for wide baseline images taken by moving robots: the ground plane is detected, and the inter-image homography induced by the ground plane is calculated; a novel technique for ground-plane matching is introduced using the overhead-view transformation. The thesis then studies the simultaneous localization and map building (SLAM) problem for a team of robots collaborating in the same work site, for which a vision-based technique is introduced. The third problem studied is the 3D reconstruction of the obstacles lying on the ground surface, for which a geometric/variational level-set method is proposed to reconstruct the obstacles detected by the robots.
666

An investigation of the effect of the type of music upon mental test performance of high school students

Merrell, Edgar Johnston January 1943 (has links)
[No abstract submitted] / Education, Faculty of / Graduate
667

Competitive intelligence

Matsenko, Olga January 2009 (has links)
Competitive intelligence (CI) helps a company make the right strategic decisions in an uncertain competitive environment. Many companies carry out various kinds of marketing research but have not yet adopted CI tools, especially in countries that have only recently begun to implement the instruments of a free-market economy; this applies to the Russian situation. The thesis is organized into three chapters. The first chapter explains competitive intelligence theory. The second chapter discusses the main tools and techniques of competitive intelligence. The third chapter covers the implementation of these tools: a new marketing strategy is developed for a restaurant chain using competitive intelligence. 'Rosinter Restaurant Holding' is a leading casual-dining chain operator in Russia, and the main focus is on the 'Planet Sushi' restaurant chain in the Omsk region. The chapter shows how competitive intelligence tools are applied in the marketing department while creating the new strategy.
668

Dynamic Bayesian networks

Horsch, Michael C. January 1990 (has links)
Given the complexity of the domains for which we would like to use computers as reasoning engines, an automated reasoning process will often be required to perform under some state of uncertainty. Probability provides a normative theory with which uncertainty can be modelled. Without assumptions of independence from the domain, naive computations of probability are intractable. If probability theory is to be used effectively in AI applications, the independence assumptions from the domain should be represented explicitly and used to the greatest possible advantage. One such representation is a class of mathematical structures called Bayesian networks. This thesis presents a framework for dynamically constructing and evaluating Bayesian networks. In particular, this thesis investigates the issue of representing probabilistic knowledge which has been abstracted from the particular individuals to which it may apply, resulting in a simple representation language. This language makes the independence assumptions for a domain explicit. A simple procedure is provided for building networks from knowledge expressed in this language. The mapping between the knowledge base and the network created is precisely defined, so that the network always represents a consistent probability distribution. Finally, this thesis investigates the issue of modifying the network after some evaluation has taken place, and several techniques for correcting the state of the resulting model are derived. / Science, Faculty of / Computer Science, Department of / Graduate
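The payoff of making independence explicit can be sketched with a small network. The representation below (a dict of variables with parents and conditional probability tables) and the burglary/alarm/call numbers are standard textbook-style assumptions, not the thesis's language; the point is that the joint distribution factorizes into the local conditionals the network stores.

```python
from itertools import product

# {variable: (parents, cpt)} where cpt maps a tuple of parent values
# to P(variable = True | parents). Parents are listed before children.
net = {
    "burglary": ((), {(): 0.01}),
    "alarm":    (("burglary",), {(True,): 0.95, (False,): 0.02}),
    "call":     (("alarm",),    {(True,): 0.90, (False,): 0.05}),
}

def joint(net, assignment):
    """P(assignment) as the product of local conditionals: the explicit
    independence assumptions are exactly what licenses this factorization."""
    p = 1.0
    for var, (parents, cpt) in net.items():
        pv = cpt[tuple(assignment[q] for q in parents)]
        p *= pv if assignment[var] else 1.0 - pv
    return p

def prob(net, var, value):
    """Marginal by brute-force enumeration (fine for tiny networks)."""
    names = list(net)
    return sum(
        joint(net, dict(zip(names, values)))
        for values in product([False, True], repeat=len(names))
        if dict(zip(names, values))[var] == value
    )
```

A real evaluator would exploit the network structure instead of enumerating, but the enumeration makes the semantics of the factorization easy to check.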
669

A cerebellum-like learning machine

Klett, Robert Duncan January 1979 (has links)
This thesis derives a new learning system which is presented both as an improved cerebellar model and as a general-purpose learning machine. It is based on a summary of recent publications concerning the operating characteristics and structure of the mammalian cerebellum, and on standard interpolation and surface-fitting techniques for functions of one and several variables. The system approximates functions as weighted sums of continuous basis functions. Learning, which takes place in an iterative manner, is accomplished by presenting the system with arbitrary training points (function input variables) and associated function values. The system is shown to be capable of minimizing the estimation error in the mean-square-error sense. The system is also shown to minimize the expected interference, on all other points in the input space, that results from learning at a single point; in this sense, the system maximizes the rate at which arbitrary functions are learned. / Applied Science, Faculty of / Electrical and Computer Engineering, Department of / Unknown
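The scheme described, a weighted sum of continuous basis functions trained iteratively from (input, value) pairs, can be sketched with Gaussian bases and a normalized LMS update. The basis shape, widths, and learning rate are assumptions for the example, not the thesis's cerebellar model:

```python
import numpy as np

def gaussian_basis(x, centers, width=0.5):
    """Continuous basis functions spread over the input space."""
    return np.exp(-((x - centers) ** 2) / (2.0 * width ** 2))

def train_lms(xs, ys, centers, rate=0.3, epochs=200):
    """Iterative learning: each training point nudges the weights in
    proportion to the local error. The normalized step divides by the
    basis activation energy, which keeps the per-point correction stable
    and limits interference with previously learned points."""
    w = np.zeros(len(centers))
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            phi = gaussian_basis(x, centers)
            w += rate * (y - phi @ w) * phi / (phi @ phi)
    return w
```

Because each Gaussian is active only near its center, a correction at one input mainly adjusts nearby weights, a rough analogue of the localized, low-interference learning the thesis attributes to the cerebellar architecture.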
670

Using competitive intelligence in determining potential competitor strategies

Du Bruyn, Heyns 30 November 2011 (has links)
M.Comm. / It is critical for companies in today's competitive business environment to understand the impact and influence that competitors and the external environment have on the success of their strategies and competitive advantage. A business must therefore comprehend how competitors' strategies and external environmental forces may affect its competitive advantage, and it requires actionable intelligence to monitor, analyse and determine that impact. Businesses have to develop appropriate strategies to achieve competitive advantage over competitors in their industry. The question this study addresses is whether businesses are able to monitor the strategies and influences of the external environment effectively. This is needed to gain a competitive advantage, and is accomplished by producing actionable intelligence through the competitive intelligence cycle. The purpose of the study is to determine how a business can use competitive intelligence to determine the potential strategies of competitors. To achieve these objectives, a literature study was completed on the subject matter. The study established that the competitive intelligence function consists of four distinct phases. Phase one determines the intelligence requirements of the end users of the intelligence. Phase two involves the collection of information. Phase three involves the analysis of the information in order to produce intelligence. Phase four disseminates the intelligence to the end users (those who requested it). Each of the four phases of the competitive intelligence cycle has been examined and discussed, with special attention paid to the analytical techniques and tools of phase three that are used to produce actionable intelligence.
