141

A heuristic algorithm to generate test program sequences for moving probe electronic test equipment

Arteta, Bertha M. 01 October 2001 (has links)
The electronics industry is experiencing two trends, one of which is the drive towards miniaturization of electronic products. In-circuit testing, predominantly used for continuity testing of printed circuit boards (PCBs), can no longer meet the demands of smaller-size circuits. This has led to the development of moving probe test equipment. Moving probe testing opens up the opportunity to test PCBs whose test points are on a small pitch (distance between points). However, since the test uses probes that move sequentially to perform the test, the total test time is much greater than with traditional in-circuit testing. While significant effort has concentrated on the equipment design and development, little work has examined algorithms for efficient test sequencing. The test sequence has the greatest impact on total test time, which in turn determines the production cycle time of the product. Minimizing total test time is an NP-hard problem similar to the traveling salesman problem, except with two traveling salesmen that must coordinate their movements. The main goal of this thesis was to develop a heuristic algorithm to minimize the flying probe test time and to evaluate the algorithm against a "Nearest Neighbor" algorithm. The algorithm was implemented with Visual Basic and an MS Access database, and was evaluated with actual PCB test data taken from industry. A statistical analysis with a 95% confidence coefficient (C.C.) was performed to test the hypothesis that the proposed algorithm finds a sequence whose total test time is less than that found by the "Nearest Neighbor" approach. Findings demonstrated that the proposed heuristic algorithm reduces the total test time and that, therefore, production cycle time can be reduced through proper sequencing.
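For intuition, here is a minimal sketch of the kind of "Nearest Neighbor" baseline the thesis benchmarks against, under the simplifying assumption that each test touches two probe points in parallel and that a move costs the slower probe's travel time. The function and data layout are illustrative, not the author's Visual Basic implementation:

```python
import math

def dist(a, b):
    """Euclidean distance between two (x, y) probe positions."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def nearest_neighbor_sequence(tests, start_a=(0, 0), start_b=(0, 0)):
    """Greedy baseline: repeatedly pick the untested net whose two probe
    points can be reached soonest, where a move costs the time of the
    slower of the two probes (they travel in parallel)."""
    remaining = list(tests)          # each test is ((xA, yA), (xB, yB))
    pos_a, pos_b = start_a, start_b
    sequence, total_time = [], 0.0
    while remaining:
        # cost of a test = the slower probe's travel (probes move together)
        nxt = min(remaining, key=lambda t: max(dist(pos_a, t[0]), dist(pos_b, t[1])))
        total_time += max(dist(pos_a, nxt[0]), dist(pos_b, nxt[1]))
        pos_a, pos_b = nxt
        sequence.append(nxt)
        remaining.remove(nxt)
    return sequence, total_time

tests = [((1, 1), (2, 0)), ((4, 3), (5, 3)), ((0, 2), (1, 4))]
print(nearest_neighbor_sequence(tests)[1])   # total test time for the greedy order
```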
142

The impact of web site design and privacy practices on trust

Gandarillas, Carlos 23 July 2002 (has links)
The most significant issue facing the growth of eCommerce is trust. This research was conducted to determine the types of information users are willing to allow entities to collect online, such as a user's email address and click-stream behavior, and their effect on trust. This study determined empirically that participants were more willing to submit non-personally identifying data (e.g., click-stream data) than personally identifying information (e.g., email address); that participants were wary of submitting any personal information such as an email address; that when a participant submits an email address, it may not be his or her primary email address; that the opt-in/opt-out defaults for solicitations did not affect trust; that participants did not read the privacy policy; and that these findings applied to all web sites, regardless of whether they were shopping/commerce, community, download, or informational. Based on the results, several design guidelines were developed to aid web site designers in creating trusted sites.
143

The Application of Bayesian Networks in System Reliability

January 2014 (has links)
abstract: In this paper, a literature review is presented on the application of Bayesian networks in system reliability analysis. It is shown that Bayesian networks have become a popular modeling framework for system reliability analysis because they have the capability and flexibility to model complex systems, update probabilities as evidence becomes available, and give a straightforward and compact graphical representation. Research on approaches for Bayesian network learning and inference is summarized. Two groups of models with multistate nodes were developed for scenarios ranging from constant to continuous time, in order to apply Bayesian networks and contrast them with the classical fault tree method. The expanded model discretized the continuous variables and provided failure-related probability distributions over time. / Dissertation/Thesis / Masters Thesis Industrial Engineering 2014
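To make the contrast with fault trees concrete, the following toy sketch (all numbers and names assumed) performs exact inference by enumeration on a two-component Bayesian network whose conditional probability table encodes an OR gate, reproducing the classical series-system fault-tree result; a Bayesian network could hold any CPT in its place:

```python
from itertools import product

# Assumed marginal failure probabilities for two independent components.
p_fail = {"C1": 0.05, "C2": 0.10}

def p_system_fails(cpt):
    """Exact inference by enumeration over the two root nodes."""
    total = 0.0
    for c1, c2 in product([True, False], repeat=2):
        prior = ((p_fail["C1"] if c1 else 1 - p_fail["C1"])
                 * (p_fail["C2"] if c2 else 1 - p_fail["C2"]))
        total += prior * cpt[(c1, c2)]   # P(system fails | c1, c2)
    return total

# OR-gate CPT: the system fails if either component fails (series system).
or_gate = {(True, True): 1.0, (True, False): 1.0,
           (False, True): 1.0, (False, False): 0.0}
print(p_system_fails(or_gate))           # 1 - 0.95 * 0.90 = 0.145
```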
144

Centralized and Decentralized Methods of Efficient Resource Allocation in Cloud Computing

January 2016 (has links)
abstract: Resource allocation in cloud computing determines how the computer and network resources of service providers are allocated to the service requests of cloud users so as to meet the cloud users' service requirements. Efficient and effective resource allocation determines the success of cloud computing. However, it is challenging to satisfy the objectives of all service providers and all cloud users in an unpredictable environment with dynamic workloads, large shared resources, and complex policies to manage them. Many studies propose to use centralized algorithms for achieving optimal solutions for resource allocation. However, centralized algorithms may encounter a scalability problem when handling a large number of service requests in a realistically satisfactory time. Hence, this dissertation presents two studies. One study develops and tests heuristics of centralized resource allocation to produce near-optimal solutions in a scalable manner. The other study looks into decentralized methods of performing resource allocation. The first part of this dissertation defines the resource allocation problem as a centralized optimization problem in Mixed Integer Programming (MIP) and obtains the optimal solutions for various resource-service problem scenarios. Based on the analysis of the optimal solutions, various heuristics are designed for efficient resource allocation. Extended experiments are conducted with larger numbers of user requests and service providers to evaluate the performance of the resource allocation heuristics. Experimental results show that the heuristics perform comparably to the optimal solutions obtained by solving the optimization problem, while demonstrating better computational efficiency, and thus scalability. The second part of this dissertation looks into elements of service provider-user coordination, first in the formulation of the centralized resource allocation problem in MIP and then in the formulation of the optimization problem in a decentralized manner for various problem cases. By examining the differences between the centralized, optimal solutions and the decentralized solutions for those problem cases, an analysis is performed of how the decentralized service provider-user coordination breaks down the optimal solutions. Based on this analysis, strategies of decentralized service provider-user coordination are developed. / Dissertation/Thesis / Doctoral Dissertation Industrial Engineering 2016
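As a rough illustration of what a scalable allocation heuristic can look like (not the dissertation's heuristics, whose details are not given in the abstract), here is a greedy value-density sketch for a knapsack-like request-to-provider assignment; all names and numbers are hypothetical:

```python
def greedy_allocate(requests, capacities):
    """requests: list of (request_id, demand, value); capacities: {provider: cap}.
    Assign high value-per-unit-demand requests first, best-fit provider wins."""
    remaining = dict(capacities)
    allocation = {}
    # highest value per unit of demand first
    for rid, demand, value in sorted(requests, key=lambda r: r[2] / r[1], reverse=True):
        # best fit: feasible provider that would have the least leftover capacity
        feasible = [p for p, cap in remaining.items() if cap >= demand]
        if feasible:
            best = min(feasible, key=lambda p: remaining[p] - demand)
            remaining[best] -= demand
            allocation[rid] = best
    return allocation

print(greedy_allocate([("r1", 4, 8.0), ("r2", 2, 6.0), ("r3", 3, 3.0)],
                      {"p1": 5, "p2": 4}))   # -> {'r2': 'p2', 'r1': 'p1'}
```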
145

Structure-Regularized Partition-Regression Models for Nonlinear System-Environment Interactions

January 2018 (has links)
abstract: Under different environmental conditions, the relationship between the design and operational variables of a system and the system's performance is likely to vary and is difficult to describe with a single model. The environmental variables (e.g., temperature, humidity) are not controllable, while the variables of the system (e.g., heating, cooling) are mostly controllable. This phenomenon has been widely seen in the areas of building energy management, mobile communication networks, and wind energy. To account for the complicated interaction between a system and the multivariate environment under which it operates, a Sparse Partitioned-Regression (SPR) model is proposed, which automatically searches for a partition of the environmental variables and fits a sparse regression within each subdivision of the partition. SPR is an innovative approach that integrates recursive partitioning and high-dimensional regression model fitting within a single framework. Moreover, theoretical studies of SPR are explicitly conducted to derive oracle inequalities for the SPR estimators, which provide a bound on the difference between the risk of the SPR estimators and the Bayes risk. These theoretical studies show that the performance of the SPR estimator is almost (up to numerical constants) as good as that of an ideal estimator that can be achieved theoretically but is not available in practice. Finally, a Tree-Based Structure-Regularized Regression (TBSR) approach is proposed, motivated by the fact that model performance can be improved by joint estimation across different subdivisions in certain scenarios. It leverages the idea that models for different subdivisions may share some similarities and can borrow strength from each other. The proposed approaches are applied to two real datasets in the domain of building energy. (1) SPR is used in an application that adopts building design and operational variables, outdoor environmental variables, and their interactions to predict energy consumption, based on the Department of Energy's EnergyPlus data sets. SPR produces a high level of prediction accuracy and provides insights into the design, operation, and management of energy-efficient buildings. (2) TBSR is used in an application that predicts future temperature conditions, which helps decide whether or not to activate the Heating, Ventilation, and Air Conditioning (HVAC) systems in an energy-efficient manner. / Dissertation/Thesis / Doctoral Dissertation Industrial Engineering 2018
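A decoupled approximation of the SPR idea can be sketched with off-the-shelf tools: partition on the environmental variables with a shallow tree, then fit an L1-penalized regression within each subdivision. SPR itself integrates the two steps in a single framework, so this two-stage version (with synthetic data) is only illustrative:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
E = rng.uniform(size=(500, 2))       # environmental variables (not controllable)
X = rng.normal(size=(500, 10))       # system design/operational variables
# response: which sparse model applies depends on the environment
y = np.where(E[:, 0] > 0.5, 2.0 * X[:, 0], -3.0 * X[:, 1]) + 0.1 * rng.normal(size=500)

# Step 1: partition the environment space with a shallow regression tree
tree = DecisionTreeRegressor(max_leaf_nodes=4).fit(E, y)
leaves = tree.apply(E)               # leaf id for every sample

# Step 2: fit a sparse (L1-penalized) regression within each subdivision
models = {leaf: Lasso(alpha=0.05).fit(X[leaves == leaf], y[leaves == leaf])
          for leaf in np.unique(leaves)}
for leaf, m in models.items():
    print(leaf, np.round(m.coef_, 2))  # near-sparse coefficients per leaf
```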
146

Botnet Detection Using Graph Based Feature Clustering

Akula, Ravi Kiran 19 May 2018 (has links)
Detecting botnets in a network is crucial because bot activities impact numerous areas such as security, finance, health care, and law enforcement. Most existing rule- and flow-based detection methods may not be capable of detecting bot activities in an efficient manner. Hence, designing a robust botnet-detection method is of high significance. In this study, we propose a botnet-detection methodology based on graph-based features. A Self-Organizing Map is applied to establish clusters of nodes in the network based on these features. Our method is capable of isolating bots in small clusters while containing most normal nodes in the big clusters. A filtering procedure is also developed to further enhance the algorithm's efficiency by removing inactive nodes from bot detection. The methodology is verified using the real-world CTU-13 and ISCX botnet datasets and benchmarked against classification-based detection methods. The results show that our proposed method can efficiently detect the bots despite their varying behaviors.
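A minimal sketch of the pipeline's first two steps follows: compute per-node graph-based features from observed host-to-host flows, then cluster the nodes. KMeans stands in for the Self-Organizing Map here purely for brevity, and the toy edge list and feature choice are assumptions:

```python
import networkx as nx
import numpy as np
from sklearn.cluster import KMeans

def graph_features(edges):
    """Per-node graph-based features: degree, betweenness, clustering coefficient."""
    G = nx.Graph(edges)              # one edge per observed host-to-host flow
    deg = dict(G.degree())
    btw = nx.betweenness_centrality(G)
    clu = nx.clustering(G)
    nodes = list(G.nodes())
    feats = np.array([[deg[n], btw[n], clu[n]] for n in nodes])
    return nodes, feats

nodes, feats = graph_features([("a", "b"), ("b", "c"), ("c", "a"),
                               ("bot", "a"), ("bot", "b"), ("bot", "c"), ("bot", "d")])
labels = KMeans(n_clusters=2, n_init=10).fit_predict(feats)
print(dict(zip(nodes, labels)))      # bots tend to separate into a small cluster
```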
147

Performance Analysis of a Double Crane with Finite Interoperational Buffer Capacity with Multiple Fidelity Simulations

January 2018 (has links)
abstract: With trends of globalization on the rise, the predominant share of trade happens by sea, and experts have predicted an increase in trade volumes over the next few years. With increasing trade volumes, container ships are being upsized to meet the demand. But the problem with upsizing container ships is that the sea port terminals must be equipped adequately to improve the turnaround time; otherwise, the upsizing would not yield the anticipated benefits. This thesis focuses on a special type of double automated crane set-up with a finite interoperational buffer capacity. The buffer is placed in between the cranes, and the idea behind this research is to analyze the performance of the crane operations when this technology is adopted. This thesis proposes an approximation of this complex system, thereby addressing the computational time issue and allowing the performance of the system to be analyzed efficiently. The approach to modeling this system has been carried out in two phases. The first phase consists of the development of a discrete event simulation model to make the system evolve over time. The challenge of this model is its high processing time, which comes from performing a large number of experimental runs; this lays the foundation for the development of an analytical model of the system, for which a continuous-time Markov process approach has been adopted. Further, to improve the efficiency of the analytical model, a state aggregation approach is proposed. Thus, this thesis gives an insight into the outcomes of the two approaches and the behavior of the error space, and the performance of the models for varying buffer capacities reflects the scope of improvement in these kinds of operational set-ups. / Dissertation/Thesis / Masters Thesis Industrial Engineering 2018
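For the analytical phase, a much-reduced sketch is possible if one models only the buffer occupancy as a birth-death continuous-time Markov chain (the thesis's model is richer); the rates and capacity below are hypothetical:

```python
import numpy as np

def buffer_steady_state(lam, mu, K):
    """Birth-death CTMC for buffer occupancy 0..K: the feeding crane adds
    at rate lam (blocked when the buffer is full), the removing crane drains
    at rate mu (starved when empty). Solves pi @ Q = 0 with sum(pi) = 1."""
    n = K + 1
    Q = np.zeros((n, n))
    for i in range(n):
        if i < K:
            Q[i, i + 1] = lam        # deposit into buffer
        if i > 0:
            Q[i, i - 1] = mu         # withdraw from buffer
        Q[i, i] = -Q[i].sum()
    # replace the balance equations' redundancy with a normalization constraint
    A = np.vstack([Q.T, np.ones(n)])
    b = np.zeros(n + 1); b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

pi = buffer_steady_state(lam=1.0, mu=1.2, K=5)
print(pi, "P(feeding crane blocked) =", pi[-1])
```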
148

Integrated Supply Chain Network Design: Location, Transportation, Routing and Inventory Decisions

January 2013 (has links)
abstract: In this dissertation, an innovative framework for designing a multi-product integrated supply chain network is proposed. Multiple products are shipped from production facilities to retailers through a network of Distribution Centers (DCs). Each retailer has an independent, random demand for multiple products. The particular problem considered in this study also involves mixed-product transshipments between DCs with multiple truck size selection and routing delivery to retailers. Optimally solving such an integrated problem is in general not easy due to its combinatorial nature, especially when transshipments and routing are involved. In order to find a good solution effectively, a two-phase solution methodology is derived. Phase I solves an integer programming model which includes all the constraints in the original model, except that the routings are simplified to direct shipments using estimated routing cost parameters. Phase II then solves the lower-level inventory routing problem for each opened DC and its assigned retailers. The accuracy of the estimated routing cost and the effectiveness of the two-phase solution methodology are evaluated, and the computational performance is found to be promising: the problem can be heuristically solved within a reasonable time frame for a broad range of problem sizes (one hour for the instance with 200 retailers). In addition, a model is generated for a similar network design problem considering direct shipment and consolidation opportunities within the same product set. A genetic algorithm and a problem-specific heuristic are designed, tested, and compared on several realistic scenarios. / Dissertation/Thesis / Ph.D. Industrial Engineering 2013
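The two-phase decomposition can be illustrated in miniature: Phase I approximates routing by direct out-and-back trips to estimate costs, and Phase II builds an actual heuristic route for each opened DC. The nearest-neighbor routing and coordinates below are assumptions for illustration, not the dissertation's models:

```python
import math

def d(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def direct_shipment_estimate(dc, retailers):
    """Phase I proxy: routing cost approximated by out-and-back direct trips."""
    return sum(2 * d(dc, r) for r in retailers)

def nearest_neighbor_route(dc, retailers):
    """Phase II: an actual (heuristic) delivery route for one opened DC."""
    route, cost, pos = [dc], 0.0, dc
    todo = list(retailers)
    while todo:
        nxt = min(todo, key=lambda r: d(pos, r))
        cost += d(pos, nxt)
        route.append(nxt); pos = nxt; todo.remove(nxt)
    cost += d(pos, dc)               # return to the DC
    return route + [dc], cost

retailers = [(2, 1), (3, 4), (5, 2), (1, 5)]
print(direct_shipment_estimate((0, 0), retailers))   # Phase I cost estimate
print(nearest_neighbor_route((0, 0), retailers)[1])  # Phase II routed cost
```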
149

Locating Counting Sensors in Traffic Network to Estimate Origin-Destination Volumes

January 2013 (has links)
abstract: Improving the quality of Origin-Destination (OD) demand estimates increases the effectiveness of the design, evaluation, and implementation of traffic planning and management systems. The associated bilevel Sensor Location Flow-Estimation problem considers two important research questions: (1) how to compute the best estimates of the flows of interest by using anticipated data from given candidate sensor locations; and (2) how to decide on the optimum subset of links where sensors should be located. In this dissertation, a decision framework is developed to optimally locate sensors and obtain high-quality OD volume estimates in vehicular traffic networks. The framework includes a traffic assignment model to load the OD traffic volumes on routes in a known choice set, a sensor location model to decide on which subset of links to locate counting sensors to observe traffic volumes, and an estimation model to obtain the best estimates of OD or route flow volumes. The dissertation first addresses the deterministic route flow estimation problem given a priori knowledge of route flows and their uncertainties. Two procedures are developed to locate "perfect" and "noisy" sensors, respectively. Next, it addresses a stochastic route flow estimation problem. A hierarchical linear Bayesian model is developed, where the real route flows are assumed to be generated from a Multivariate Normal distribution with two parameters: a "mean" and a "variance-covariance matrix". The prior knowledge of the "mean" parameter is described by a probability distribution. When the "variance-covariance matrix" parameter is assumed known, a Bayesian A-optimal design is developed. When the "variance-covariance matrix" parameter is unknown, a Markov Chain Monte Carlo approach is used to estimate the a posteriori quantities. In all the sensor location models, the objective is to maximize the reduction in the variances of the distribution of the OD volume estimates. The developed models are compared with other models available in the literature, and the comparison showed that the developed models performed better. / Dissertation/Thesis / Ph.D. Industrial Engineering 2013
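In the linear-Gaussian case, the Bayesian A-optimal idea reduces to greedily choosing the link whose rank-one information update most shrinks the trace of the posterior covariance of the OD flows. The sketch below assumes a known link-use matrix and noise variance, both hypothetical:

```python
import numpy as np

def greedy_a_optimal(A, Sigma, noise_var, k):
    """Pick k links (rows of A) greedily to minimize the trace of the
    posterior covariance of the OD flows (Bayesian A-optimality)."""
    chosen, info = [], np.linalg.inv(Sigma)    # prior information matrix
    for _ in range(k):
        best, best_trace = None, np.inf
        for j in range(A.shape[0]):
            if j in chosen:
                continue
            a = A[j:j + 1]                     # candidate link's row
            cand = info + a.T @ a / noise_var  # rank-one information update
            tr = np.trace(np.linalg.inv(cand))
            if tr < best_trace:
                best, best_trace = j, tr
        chosen.append(best)
        a = A[best:best + 1]
        info = info + a.T @ a / noise_var
    return chosen, best_trace

# toy network: 4 OD pairs, 6 candidate links, random link-use incidence matrix
rng = np.random.default_rng(1)
A = rng.integers(0, 2, size=(6, 4)).astype(float)
links, tr = greedy_a_optimal(A, Sigma=np.eye(4) * 100.0, noise_var=1.0, k=3)
print("sensors on links", links, "posterior trace", round(tr, 2))
```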
150

Non-Linear Variation Patterns and Kernel Preimages

January 2013 (has links)
abstract: Identifying important variation patterns is a key step to identifying root causes of process variability. This gives rise to a number of challenges. First, the variation patterns might be non-linear in the measured variables, while the existing research literature has focused on linear relationships. Second, it is important to remove noise from the dataset in order to visualize the true nature of the underlying patterns. Third, in addition to visualizing the pattern (preimage), it is also essential to understand the relevant features that define the process variation pattern. This dissertation considers these variation challenges. A base kernel principal component analysis (KPCA) algorithm transforms the measurements to a high-dimensional feature space where non-linear patterns in the original measurement can be handled through linear methods. However, the principal component subspace in feature space might not be well estimated (especially from noisy training data). An ensemble procedure is constructed where the final preimage is estimated as the average from bagged samples drawn from the original dataset to attenuate noise in kernel subspace estimation. This improves the robustness of any base KPCA algorithm. In a second method, successive iterations of denoising a convex combination of the training data and the corresponding denoised preimage are used to produce a more accurate estimate of the actual denoised preimage for noisy training data. The number of primary eigenvectors chosen in each iteration is also decreased at a constant rate. An efficient stopping rule criterion is used to reduce the number of iterations. A feature selection procedure for KPCA is constructed to find the set of relevant features from noisy training data. Data points are projected onto sparse random vectors. Pairs of such projections are then matched, and the differences in variation patterns within pairs are used to identify the relevant features. This approach provides robustness to irrelevant features by calculating the final variation pattern from an ensemble of feature subsets. Experiments are conducted using several simulated as well as real-life data sets. The proposed methods show significant improvement over the competitive methods. / Dissertation/Thesis / Ph.D. Industrial Engineering 2013
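A compact sketch of the bagged-preimage idea, using scikit-learn's KernelPCA with a learned inverse transform as the base denoiser (the dissertation's own preimage machinery may differ); the kernel parameters and toy data are assumptions:

```python
import numpy as np
from sklearn.decomposition import KernelPCA

def bagged_kpca_preimage(X, n_components=2, n_bags=25, seed=0):
    """Ensemble preimage: fit KPCA on bootstrap samples, denoise the data by
    mapping through the kernel subspace and back, and average the preimages
    to attenuate noise in the kernel-subspace estimate."""
    rng = np.random.default_rng(seed)
    preimages = np.zeros_like(X)
    for _ in range(n_bags):
        idx = rng.integers(0, len(X), size=len(X))   # bootstrap sample
        kpca = KernelPCA(n_components=n_components, kernel="rbf",
                         gamma=0.5, fit_inverse_transform=True)
        kpca.fit(X[idx])
        preimages += kpca.inverse_transform(kpca.transform(X))
    return preimages / n_bags

# noisy ring: a non-linear variation pattern that linear PCA would smear out
t = np.random.default_rng(2).uniform(0, 2 * np.pi, 300)
X = np.c_[np.cos(t), np.sin(t)] + 0.1 * np.random.default_rng(3).normal(size=(300, 2))
X_denoised = bagged_kpca_preimage(X)
```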
