41

Improved Opportunity Cost Algorithm for Carrier Selection in Combinatorial Auctions

Uma Gnanasekaran, Viswanath 04 June 2004 (has links)
Transportation costs constitute up to thirty percent of the total costs involved in a supply chain. Outsourcing transportation service requirements to third-party logistics providers has been widely adopted, as it is economically more rational than owning and operating a service. Transportation service procurement has traditionally been done through an auctioning process in which the auctioneer (shipper) auctions lanes (distinct delivery routes) to bidders (carriers). Auctioning individual lanes separately prevents carriers from expressing complements and substitutes. Using a combinatorial auction mechanism to auction all available lanes together allows carriers to exploit lane bundles, their existing service schedules, the probability of securing other lanes, and available capacity to offer services at lower rates and be more competitive. The winners of the auction are the set of non-overlapping bids that minimizes the cost to the shipper. The winner determination problem solved to find the optimal allocation of services in such combinatorial auctions is NP-hard. Many heuristics, such as approximate linear programming and stochastic local search, have been proposed to find an approximate solution in a reasonable amount of time. Akcoglu et al. [22] developed the opportunity cost algorithm, which uses the local ratio technique to compute a greedy solution to the problem. A recalculation modification to the opportunity cost algorithm is formulated here, in which opportunity costs are recalculated for the set of remaining bids each time a bid chosen for the winning solution and its conflicting bids are eliminated. Another method, which builds the winning solution from the maximum total revenue values calculated for each bid by the opportunity cost algorithm, is also investigated.
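As a rough illustration of the select-and-eliminate loop described above, the sketch below greedily picks bids and removes conflicting ones each round; the cost-per-lane score is only a stand-in for the opportunity costs the thesis computes with the local ratio technique, and the bid data are invented.

```python
# Minimal sketch: greedy winner determination for a reverse (procurement)
# combinatorial auction. Scoring by cost per lane is an illustrative stand-in
# for the thesis's opportunity-cost values.
def greedy_winner_determination(bids):
    """bids: list of (bid_id, set_of_lanes, cost). Returns winning bid ids."""
    remaining = list(bids)
    winners = []
    while remaining:
        # Re-score the remaining bids every round (mirrors the recalculation idea).
        best = min(remaining, key=lambda b: b[2] / len(b[1]))
        winners.append(best[0])
        # Eliminate the chosen bid and every bid sharing a lane with it.
        remaining = [b for b in remaining if b[1].isdisjoint(best[1])]
    return winners

bids = [("b1", {"L1", "L2"}, 900.0),
        ("b2", {"L2", "L3"}, 700.0),
        ("b3", {"L3"}, 450.0),
        ("b4", {"L1"}, 500.0)]
print(greedy_winner_determination(bids))  # ['b2', 'b4']
```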
42

Optimal Test Case Selection for Multi-Component Software System

Kysetti, Praveen Babu 14 October 2004 (has links)
The omnipresence of software has forced the industry to produce efficient software in a short time. These requirements can be met through code reusability and software testing. Code reusability is achieved by developing software as components/modules rather than as a single block. Software coding teams are growing large to meet massive requirements, and large teams can work more easily when software is developed in a modular fashion. Software that crashes often is of little use; testing makes it more reliable. Modularity and reliability are therefore essential. Testing is usually carried out using test cases that target a class of software faults or a specific module, and different test cases have distinct effects on the reliability of the software system. The proposed research develops a model to determine the optimal test case selection policy for a modular software system with specific test cases and a stipulated testing time. The model describes the failure behavior of each component with a conditional NHPP (non-homogeneous Poisson process) and the interactions of the components with a CTMC (continuous-time Markov chain). The initial number of bugs and the bug detection rate follow known distributions. Dynamic programming is used to determine the optimal test case policy, and the complete model is simulated in Matlab. The Markov decision process is computationally intensive, but the implementation of the algorithm is carefully optimized to eliminate repeated calculations, saving roughly 25-40% in processing time across different variations of the problem.
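The sketch below illustrates one piece of this setup under simplifying assumptions: a Goel-Okumoto NHPP mean value function for each component and a small dynamic program that allocates discrete testing slots across components. The parameters and the objective are illustrative and do not reproduce the thesis's conditional-NHPP/CTMC formulation.

```python
# Minimal sketch: allocate a fixed testing budget across components whose
# expected failure detections follow a Goel-Okumoto NHPP, m(t) = a*(1 - exp(-b*t)).
import math

def expected_detections(a, b, t):
    return a * (1.0 - math.exp(-b * t))

def allocate_testing_time(components, total_slots, slot_len=1.0):
    """components: list of (a, b). DP maximizing expected detections over slots."""
    n = len(components)
    best = [[0.0] * (total_slots + 1) for _ in range(n + 1)]
    choice = [[0] * (total_slots + 1) for _ in range(n + 1)]
    for c in range(1, n + 1):
        a, b = components[c - 1]
        for s in range(total_slots + 1):
            for k in range(s + 1):  # slots given to component c
                val = best[c - 1][s - k] + expected_detections(a, b, k * slot_len)
                if val > best[c][s]:
                    best[c][s], choice[c][s] = val, k
    alloc, s = [], total_slots          # recover the allocation
    for c in range(n, 0, -1):
        alloc.append(choice[c][s]); s -= choice[c][s]
    return best[n][total_slots], list(reversed(alloc))

print(allocate_testing_time([(30, 0.4), (20, 0.9), (50, 0.2)], total_slots=10))
```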
43

Probabilistic Risk Assessment Method for Prioritization of Risk Factors

Shah, Jay Tarakkumar 10 November 2004 (has links)
Risk management involves assessing risk sources and designing strategies and procedures to mitigate those risks to an acceptable level. Measurement of risk factors plays an important role in this assessment. This research develops risk assessment frameworks and a mathematical model (the Probabilistic Risk Assessment model) to identify the risk factors. Quantifying and prioritizing the risk factors helps in designing controls and resource allocation policies and in minimizing the total cost through the Cost Minimization model. The proposed models are applied to a complex system that is representative of actual business situations.
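A minimal sketch of the kind of quantification and prioritization described above, assuming a simple expected-loss (probability times impact) score; the factors and numbers are purely illustrative and are not taken from the thesis.

```python
# Minimal sketch: rank risk factors by expected loss = probability * impact.
risk_factors = [
    {"name": "supplier failure", "probability": 0.10, "impact": 500_000},
    {"name": "data breach",      "probability": 0.02, "impact": 2_000_000},
    {"name": "staff turnover",   "probability": 0.30, "impact": 80_000},
]
for r in risk_factors:
    r["expected_loss"] = r["probability"] * r["impact"]

ranked = sorted(risk_factors, key=lambda r: r["expected_loss"], reverse=True)
for rank, r in enumerate(ranked, 1):
    print(f'{rank}. {r["name"]}: expected loss ${r["expected_loss"]:,.0f}')
```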
44

Cognitive Memory Effects on Non-Linear Video-Based Learning

Comeaux, Katherine Renee 19 January 2005 (has links)
During an informative learning process, information, material, facts, and ideas are typically conveyed in a linear arrangement. Individuals are frequently distracted during this process, with their attention diverted by an interruption (Internet, phone call, etc.). When presented with new information, the mind works through problem-solving and evaluation procedures, and the way that information is processed and perceived depends on (a) the original presentation, (b) examination of the material, and (c) an individual measure of success. When faced with an interruption, however, the person is forced to deal with a non-linear arrangement of information. This research investigates non-linear presentation or seeking of material and its effects on memory retention. This study (1) analyzed the cognitive consequences of non-linear information paths in comparison to standard/linear paths; (2) investigated the user's knowledge acquisition and control through non-linear paths during navigation while being interrupted; and (3) determined how this non-linear presentation of instructions affects the overall learning experience. The research specifically focused on performance levels under one of four conditions (procedural/segmented, procedural/non-segmented, non-procedural/segmented, or non-procedural/non-segmented) while interacting with a distributed web-based learning environment. The study population included 62 college students who completed a 20-minute web-based session. Each student completed a background questionnaire, a video assessment questionnaire, a working memory test, a workload test, a comprehension test, and a learning style test. The workload test was the NASA-TLX, which measures the workload experienced during the web-based session. The learning style test was the Group Embedded Figures Test (GEFT), which classified participants as either field independent or field dependent. There was no significant difference in user performance between procedural/non-procedural tasks and segmented/non-segmented video types (p = 0.1224). However, a comparison of the means for each task type and technology type showed that the procedural/segmented group performed much higher than the other groups. There was marginal significance for performance level depending on individual learning style (p = 0.0838).
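For readers unfamiliar with the analysis behind p-values such as these, the sketch below runs a 2x2 factorial ANOVA with task type and video segmentation as factors; the scores, column names, and sample size are illustrative and are not the study's data.

```python
# Minimal sketch: two-way ANOVA on hypothetical performance scores
# (task type x video segmentation), mirroring the reported comparisons.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.DataFrame({
    "task":  ["procedural", "procedural", "nonprocedural", "nonprocedural"] * 4,
    "video": ["segmented", "nonsegmented"] * 8,
    "score": [82, 74, 70, 69, 85, 71, 68, 72, 80, 75, 66, 70, 84, 73, 69, 71],
})
model = smf.ols("score ~ C(task) * C(video)", data=df).fit()
print(anova_lm(model, typ=2))  # main effects and interaction with p-values
```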
45

Semantic Classification of Rural and Urban Images Using Learning Vector Quantization

Thulasiraman, Prakash 24 January 2005 (has links)
One of the major hurdles in semantic image classification is that only low-level features can be reliably extracted from images, as opposed to higher-level features (objects present in the scene and their inter-relationships). The main challenge lies in grouping images into semantically meaningful categories based on the available low-level visual features. It is important to have a classification method that can handle a complex image dataset whose cluster boundaries are not well defined. Learning Vector Quantization (LVQ) neural networks offer a great deal of robustness in clustering complex datasets. This study presents a semantic image classification method using an LVQ neural network on low-level texture, shape, and color features extracted from rural and urban images with the box counting dimension method (Peitgen et al. 1992), the fast Fourier transform, and the HSV color space. The performance measures precision and recall were calculated over various ranges of input parameters such as learning rate, number of iterations, and number of hidden neurons for the LVQ network. The study also tested the robustness of the features to image object orientation (rotation and position) and image size. Our method was compared against the method of Prabhakar et al. (2002). For various combinations of texture, shape, and color features, our method's precision and recall ranged between 0.68 and 0.88 and between 0.64 and 0.90, respectively, compared with a precision of 0.59 and a recall of 0.62 (on our image dataset) for the method of Prabhakar et al. (2002).
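A minimal sketch of LVQ1 training and prediction on labeled feature vectors, assuming feature extraction has already produced numeric descriptors; the learning rate, prototype count, and data layout are illustrative choices, not the thesis's configuration.

```python
# Minimal sketch: LVQ1 on labeled low-level feature vectors (e.g. rural/urban).
import numpy as np

def train_lvq1(X, y, n_prototypes_per_class=2, lr=0.05, epochs=50, seed=0):
    rng = np.random.default_rng(seed)
    protos, proto_labels = [], []
    for c in np.unique(y):
        idx = rng.choice(np.where(y == c)[0], n_prototypes_per_class, replace=False)
        protos.append(X[idx].copy()); proto_labels += [c] * n_prototypes_per_class
    protos, proto_labels = np.vstack(protos), np.array(proto_labels)
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            j = np.argmin(np.linalg.norm(protos - X[i], axis=1))  # nearest prototype
            step = lr * (X[i] - protos[j])
            # Move the winner toward same-class samples, away from others.
            protos[j] += step if proto_labels[j] == y[i] else -step
    return protos, proto_labels

def predict(protos, proto_labels, X):
    d = np.linalg.norm(X[:, None, :] - protos[None, :, :], axis=2)
    return proto_labels[np.argmin(d, axis=1)]
```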
46

A Study of Distributed Clustering of Vector Time Series on the Grid by Task Farming

Nayar, Arun B 25 January 2005 (has links)
Traditional data mining methods were limited by the availability of computing resources such as network bandwidth, storage space, and processing power. These algorithms were developed to work around this limitation by looking at a small cross-section of the available data. However, since a major chunk of the data is left out, the predictions were generally inaccurate and missed significant features that were part of the data. Today, with resources growing at almost the same pace as data, it is possible to rethink mining algorithms to work on distributed resources and, essentially, distributed data. Distributed data mining thus holds great promise. Using grid technologies, data mining can be extended to areas that were previously out of reach because of the volume of data generated, such as climate modeling and web usage. An important characteristic of today's data is that it is highly decentralized and largely redundant, so data mining algorithms that make efficient use of distributed data are needed. Although it is possible to bring all the data together and run traditional algorithms, this incurs a high overhead in bandwidth for transmission and in preprocessing steps needed to handle every format of the received data. By processing the data locally, the preprocessing stage can be made lighter, and traditional data mining techniques can work on the data efficiently. The focus of this project is to apply an existing data mining technique, fuzzy c-means clustering, to distributed data in a simulated grid environment and to compare the performance of this approach against the traditional approach.
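A minimal sketch of the task-farming idea under simplifying assumptions: each "worker" runs fuzzy c-means on its local partition and returns only its cluster centers, and the "master" clusters the pooled centers. The merge strategy and parameters are illustrative, not the project's grid implementation.

```python
# Minimal sketch: local fuzzy c-means per partition, then cluster the centers.
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c)); U /= U.sum(axis=1, keepdims=True)  # memberships
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))
        U /= U.sum(axis=1, keepdims=True)
    return centers

def distributed_cluster(partitions, c):
    local_centers = [fuzzy_c_means(p, c) for p in partitions]  # "worker" tasks
    return fuzzy_c_means(np.vstack(local_centers), c)          # "master" merge

parts = [np.random.default_rng(s).normal(s, 1.0, size=(200, 2)) for s in range(3)]
print(distributed_cluster(parts, c=3))
```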
47

Effects of Pressure Suit and Race on Functional Reach, Static and Dynamic Strength

Uppu, Nageswara Rao 12 November 2004 (has links)
In the design of any manual workspace, it is important for designers to have access to data that illustrate reach capabilities under real work conditions. Wearing bulky clothing (a pressure suit) and protective restraints (seat or shoulder harness belts) is often mandatory in high-acceleration work environments. The clothing and personal equipment worn can influence functional reach and strength values since they add to body size. The present study investigated the effect of wearing a VKK-6M pressure suit on functional reach limitations and strength values. The technology of incorporating body dimensions into cockpit design evolved primarily in western countries, so the only datasets available are of Caucasians; when designing equipment for other populations, western anthropometric data are inappropriate. In this thesis, representative samples of the Caucasian and Asian Indian populations are chosen and their reach envelopes are compared. Subjects' reach and strength data were collected with and without the suit and analyzed for the effect of the pressure suit on reach and strength. The study concludes that wearing the pressure suit reduces average reach significantly (at alpha = 0.05). The 5th-percentile Asian Indian and Caucasian reach envelopes are derived for the placement of critical cockpit controls. The race-reach study showed a significant difference in shoulder breadth between Caucasians and Asian Indians (at alpha = 0.05), but no apparent relationship between bideltoid breadth and thumb-tip reach was found. The study of the effect of the pressure suit on strength (at alpha = 0.05) concluded that the suit does not affect static or dynamic strength.
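As a small illustration of the with-suit versus without-suit comparison at alpha = 0.05, the sketch below runs a paired t-test on hypothetical reach measurements for the same subjects; the values are invented for illustration, not the study's data.

```python
# Minimal sketch: paired t-test of functional reach with vs. without the suit.
from scipy import stats

reach_no_suit = [74.2, 71.8, 76.5, 69.9, 73.1, 75.0, 70.4, 72.6]  # cm, per subject
reach_suit    = [70.1, 68.0, 72.9, 66.5, 69.8, 71.2, 67.3, 69.0]

t_stat, p_value = stats.ttest_rel(reach_no_suit, reach_suit)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, significant = {p_value < 0.05}")
```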
48

Bezier Curves for Metamodeling of Simulation Output

Kingre, Harish J 17 November 2004 (has links)
Many design optimization problems rely on simulation models to obtain feasible solutions. Even with substantial improvement in the computational capability of computers, the enormous cost of the computation needed for simulation makes it impractical to rely on simulation models alone. The use of metamodels, or surrogate approximations, in place of actual simulation models makes analysis tractable by reducing the computational burden. Popular metamodeling techniques include polynomial regression, multivariate adaptive regression splines, radial basis functions, Kriging, and artificial neural networks. This research proposes a new metamodeling technique that uses Bezier curves and patches. Like Kriging and radial basis functions, the Bezier curve method is based on interpolation. In this research, the Bezier curve method is used for univariate and bivariate output modeling, and the results are validated by comparison with some of the most popular metamodeling techniques.
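The sketch below shows only the basic Bezier-curve machinery (de Casteljau evaluation) applied to (input, output) samples treated as control points; the thesis's actual univariate and bivariate metamodel construction is more involved, and the sample data are illustrative.

```python
# Minimal sketch: evaluate a Bezier curve via de Casteljau's algorithm.
import numpy as np

def de_casteljau(control_points, t):
    """Evaluate a Bezier curve at parameter t in [0, 1]."""
    pts = np.array(control_points, dtype=float)
    while len(pts) > 1:
        pts = (1 - t) * pts[:-1] + t * pts[1:]  # repeated linear interpolation
    return pts[0]

# Simulation samples (x, y) used as control points of a univariate surrogate.
samples = [(0.0, 1.2), (0.25, 2.0), (0.5, 1.7), (0.75, 2.4), (1.0, 3.1)]
curve = [de_casteljau(samples, t) for t in np.linspace(0, 1, 11)]
print(curve[5])  # approximate response near the middle of the design space
```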
49

Usability of Home Cholesterol Test Kits and Their Impact on Patients' Decision

Surabattula, Deepti 01 December 2004 (has links)
The release of home health testing kits into the market has enabled people to take charge of their own health, but misinterpretation of results and delays in treatment are doctors' major concerns. In the present study, two cholesterol test kits, Accuchek® Instant Plus® and Home Access® Instant Cholesterol Test, were compared on the basis of user performance, accuracy, and the patients' future medical decisions based on the test results. The study was conducted with 30 participants, 15 men and 15 women. Participants tested their overall cholesterol level with both kits; in addition, a clinical cholesterol evaluation, the medical gold standard, was performed. The usability of both test kits was evaluated through questionnaires, user task performance, and comparison with the clinical evaluation. Participants were asked how they would use the information once they had seen the result from the first test kit. The study found that, regardless of which kit was used, participants always rated the first kit they used as the more usable one. When asked to make a decision about future health care, a predominant number of participants said they would change their lifestyle rather than visit a doctor, regardless of their cholesterol level. This finding underscores physicians' concerns that patients may delay treatment for potentially serious conditions even when results are available to them.
50

Hierarchical Indexing for Region Based Image Retrieval

Aulia, Eka 06 April 2005 (has links)
Region-based image retrieval has been an active research area. In this study we developed an improved region-based image retrieval system. The system applies image segmentation to divide an image into discrete regions which, if the segmentation is ideal, correspond to objects. The focus of this research is to improve the capture of regions so as to enhance indexing and retrieval performance, and to provide a better similarity distance computation. For image segmentation, we developed a modified k-means clustering algorithm in which a hierarchical clustering algorithm generates the initial number of clusters and the cluster centers. In addition, during similarity distance computation we introduced an object weight based on the object's uniqueness, so that objects that are not unique, such as trees and sky, carry less weight. The experimental evaluation is based on the same 1000-image COREL color image database used by FuzzyClub, IRM, and Geometric Histogram, and performance is compared across these systems. Compared with existing techniques and systems such as IRM, FuzzyClub, and Geometric Histogram, our study demonstrates the following advantages: (i) an improvement in image segmentation accuracy using the modified k-means algorithm, and (ii) an improvement in retrieval accuracy as a result of a better similarity distance computation that considers the importance and uniqueness of objects in an image.
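A minimal sketch of the segmentation idea described above, assuming per-pixel feature vectors: agglomerative (hierarchical) clustering supplies initial centers, which then seed k-means. Here the number of clusters is fixed for simplicity, whereas the thesis derives it from the hierarchy; the libraries, parameters, and data are illustrative.

```python
# Minimal sketch: seed k-means with centers from hierarchical clustering.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.cluster import KMeans

def hierarchical_seeded_kmeans(features, k):
    # 1. Hierarchical clustering provides an initial partition ...
    labels = fcluster(linkage(features, method="ward"), t=k, criterion="maxclust")
    init_centers = np.array([features[labels == c].mean(axis=0) for c in range(1, k + 1)])
    # 2. ... and its cluster means seed the k-means refinement.
    km = KMeans(n_clusters=k, init=init_centers, n_init=1).fit(features)
    return km.labels_, km.cluster_centers_

rng = np.random.default_rng(1)
pixels = np.vstack([rng.normal(loc, 0.3, size=(50, 3)) for loc in (0.0, 2.0, 4.0)])
labels, centers = hierarchical_seeded_kmeans(pixels, k=3)
print(centers)
```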
