About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.

The two-machine open shop problem with time delays

Zhang, Li Chuan January 2012 (has links) (PDF)
Research on scheduling problems as we know it today dates back to the 1950s. Indeed, in 1954, Johnson described an exact algorithm to minimize the overall completion time (known as the makespan) for the two-machine flow shop problem. This gave rise to a new discipline of operational research known as scheduling theory. Scheduling theory deals with the use of a set of scarce resources to accomplish varied tasks so as to optimize one or several criteria. The resources and tasks are commonly referred to as machines and jobs, respectively. There is a broad spectrum of definitions for tasks and resources: the resources can take the form of production lines in workshops, airport runways, school classrooms, etc., while a task may be represented by the work-pieces processed on production lines, the aircraft taking off and landing at airports, the teachers lecturing in classrooms, and so on. The processing of a job on a given machine is called an operation. Note that machines and jobs may be characterized by many other factors, such as speed, time of availability and duplication for the former, and precedence constraints, due dates and time lags for the latter. These factors must be taken into account when formulating a scheduling strategy if we want to produce a realistic solution. Generally speaking, scheduling problems fall into three categories: single-machine models, parallel-machine models and multi-operation models; the multi-operation models are the flow shop, the open shop and the job shop. A scheduling solution is evaluated according to one or several criteria, such as the minimization of the makespan, the mean finish time or the number of tardy jobs. This thesis is mainly concerned with minimizing the makespan in a two-machine open shop environment with time delay considerations.
To better approach the resolution of this problem, some basic concepts of scheduling theory and related knowledge, such as the theory of NP-completeness, are first introduced. It is important to analyze the advantages and disadvantages of different algorithms in order to come up with an adequate solution. This dissertation addresses the two-machine open shop problem with time delays along the following lines. First, we look at special cases that can be solved to optimality in polynomial time. Then, we move on to the design of heuristic algorithms based on simple rules, and discuss approaches to evaluating their performance. Finally, we present two meta-heuristic algorithms and lower bounds, and undertake simulation experiments to analyze their average-case performance. For the meta-heuristics, we use simulated annealing and tabu search, respectively; the latter is then improved by adding intensification and diversification strategies. Finally, we propose a new hybrid approach to solve the original open shop problem, in the hope of improving the quality of the solutions and the speed with which they are obtained.
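The meta-heuristic step can be illustrated in miniature. The sketch below applies simulated annealing with a swap neighbourhood to the restricted case where every job is processed on machine 1 before machine 2 (a simplifying assumption; a true open shop also allows the reverse routing). All data and parameter values are illustrative, not taken from the thesis:

```python
import math
import random

def makespan(seq, p1, p2, delay):
    """Makespan of a permutation schedule where every job visits
    machine 1 first, then machine 2 after its time delay."""
    t1 = 0.0  # time at which machine 1 becomes free
    t2 = 0.0  # time at which machine 2 becomes free
    for j in seq:
        t1 += p1[j]                               # job j finishes on machine 1
        t2 = max(t2, t1 + delay[j]) + p2[j]       # waits out its delay, then runs on machine 2
    return t2

def simulated_annealing(p1, p2, delay, temp=10.0, cooling=0.995, iters=5000):
    """Improve a random permutation by swapping two jobs at a time,
    accepting worse moves with a temperature-controlled probability."""
    seq = list(range(len(p1)))
    random.shuffle(seq)
    best = cur = makespan(seq, p1, p2, delay)
    best_seq = seq[:]
    for _ in range(iters):
        i, k = random.sample(range(len(seq)), 2)
        seq[i], seq[k] = seq[k], seq[i]           # propose a swap
        cand = makespan(seq, p1, p2, delay)
        if cand <= cur or random.random() < math.exp((cur - cand) / temp):
            cur = cand                            # accept the move
            if cur < best:
                best, best_seq = cur, seq[:]
        else:
            seq[i], seq[k] = seq[k], seq[i]       # undo the swap
        temp *= cooling
    return best, best_seq
```

The same evaluator can serve a tabu search by replacing the acceptance rule with a tabu list over recent swaps.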

Design of connectors in distributed system based on extended attribute-driven design method

Qi, Yan January 2012 (has links) (PDF)
Today, software architecture receives much attention in the software development process. In architectural terms, components and connectors are the two key concepts for understanding the logical organization of software. Within this organization, components must be connected and configured into a system that exchanges data. To satisfy these connection requirements, connectors provide components with mechanisms for communication, coordination and cooperation. In distributed systems in particular, connectors play an important role in the software architecture. However, engineers often struggle to build connectors with high quality and full functionality, because of differing levels of understanding of connectors, a lack of design models, and few maintenance approaches. Several existing approaches attack the connector problem: for example, Aspect-Oriented Programming is used to build connectors based on the relationships between components, and middleware solutions are adopted to develop connectors according to the information transmitted between components. In this thesis we present a new definition of software connectors. This definition considers the different aspects of connectors, especially connectors in distributed systems, and describes them clearly and fully. Additionally, it covers the different knowledge areas involved in designing connectors, in particular software quality attributes. We create a new design approach, the Extended Attribute-Driven Design method (EADD), for both architecture design and the selection of development tools. EADD can drive the architecture design of connectors to meet functional requirements and achieve quality attributes.
Based on EADD and the new definition of connectors, we propose a model for designing connectors whose aim is to produce an architecture design and select development tools. The model comprises a Life Cycle Model (LCM) and a Layered Design Model (LDM), which are meant to enhance the high-level design of connectors; in particular, the model can organize a set of development tools to satisfy the quality attributes of connectors. It can be applied to the design of generic connectors in distributed systems, for example connectors in component-based distributed systems or in service-oriented architectures (SOA). For the model, we provide typical approaches and tools used in practice. Furthermore, we analyze the model against existing approaches to highlight its advantages, examine its effect on the classification of connectors, and attempt a new classification. The thesis closes with a case study: the design of a connector for a push-mail system in a wireless network. The case study shows the design process for the connector's architecture and the selection of the related development tools based on our design model.

An activity recognition approach using temporal data mining

Moutacalli, Mohamed Tarik January 2012 (has links) (PDF)
Assisting a person with Alzheimer's disease is a difficult, costly and complex task. The relationship between the caregiver and the patient, who wishes to preserve his or her privacy, is often complicated. With the emergence of ambient intelligence, research has turned toward automated assistance, which consists in replacing the human assistant with an artificial agent. The greatest challenge of this solution lies in recognizing, or even predicting, the patient's activity in order to help it unfold properly, when and as needed. The very large number of activities of daily living (ADLs) the patient may perform, which we also call the number of hypotheses, greatly complicates this search: since each activity is composed of several actions, the search amounts to finding, in real time, the action or actions performed by the patient among all the actions of all of his or her activities. This master's thesis explores temporal data mining techniques to answer this problem, trying to reduce as much as possible the number of hypotheses at any given instant. The work begins with an analysis of the history of the patient's actions to create activity plans: plans specific to each patient that specify the ordered list of actions composing an activity. Next, a temporal segmentation is performed on each plan, creating one or more time intervals summarizing the periods in which the activity usually starts. The third step consists in implementing an activity recognition system that finds, at any instant, the most probable activity. This work relies essentially on the temporal aspect and does not merely offer a solution for activity recognition; it also addresses initiation errors.
These are errors that Alzheimer's patients are likely to commit and that, to our knowledge, no previous research had addressed. Our approach was tested and validated on real data recorded by observing actual Alzheimer's patients, and the results are very satisfactory.
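The combination of activity plans and temporal segmentation can be sketched as a simple candidate filter. The plans, intervals and action names below are hypothetical illustrations, not the thesis's actual model:

```python
def likely_activities(plans, intervals, observed, now):
    """Rank candidate activities: keep those whose usual start
    intervals contain `now` and whose plan begins with the observed
    action sequence.
    plans     : activity -> ordered list of actions (the activity plan)
    intervals : activity -> list of (start, end) usual start periods
    observed  : actions seen so far; now : current time (e.g. hour of day)
    """
    candidates = []
    for act, actions in plans.items():
        if not any(s <= now <= e for s, e in intervals[act]):
            continue                          # temporal filter rejects it
        if actions[:len(observed)] == list(observed):
            candidates.append(act)            # plan prefix matches observations
    return candidates
```

Each observed action narrows the hypothesis set further, which is the reduction the thesis aims for.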

An intelligent system to support process control in an arc furnace

Girard-Nault, Jimmy January 2012 (has links) (PDF)
Process control is a difficult task, particularly when the system exhibits nonlinearities such as chemical reactions or unforeseen events. This kind of process is found in the arc furnaces used, among other things, to produce steel and ferrosilicon. Besides facing several sources of nonlinearity, it is impossible to know and measure exactly what is happening inside the furnace. The lack of control and automation associated with these problems forces operators to make important decisions that affect the course of the process. Since these decisions are most often based on the workers' experience and intuition, they will not always be the most effective; the probability of error decreases as the operator gains expertise. Yet not everyone has the same expertise, experienced personnel are rare and costly, and when an employee retires, this knowledge is often lost. To improve process control and automation, authors have proposed solutions for controlling subsystems: a fairly limited number of models exist for controlling electrode position, improving energy efficiency, estimating the temperature of the molten metal, and so on. But these models are confined to their respective subsystems; they neither use the operators' expertise nor preserve it, and solutions that improve control globally are very rare. An expert system could formalize, codify and preserve arc-furnace domain expertise (coming among others from the operators) in a knowledge base.
By combining this knowledge base with information such as real-time process data, the system can provide indications, infer phenomena and become a powerful decision-support tool that improves control of the process as a whole. This thesis provides the methodology and the knowledge required to implement an expert system serving as a decision-support tool for process control in arc furnaces. It proposes production rules, described with a formalism for codifying the operators' expertise, together with models based on mathematical equations; the latter use historical and real-time data such as the electrical parameters. Both elements are used by the expert system to diagnose the internal state of the furnace and of its mix. Among other things, it can infer whether metal is accumulating, whether carbon should be added to the mix, and the position of the electrodes. It thus improves overall process control, preserves domain expertise and uses that expertise in its deductions. Finally, the system was implemented and validated in an industrial setting where ferrosilicon is produced in an arc furnace: the models were validated individually by comparing their output with reality over a twelve-day period, and the rules were tested by face validation and compared with the decisions made by the experts.
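The rule-firing core of such an expert system can be sketched as naive forward chaining over production rules. The rules and fact names below are purely illustrative stand-ins, not the knowledge base described in the thesis:

```python
def forward_chain(rules, facts):
    """Naive forward chaining: fire every production rule whose
    conditions are all satisfied, until no new fact can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)     # rule fires, conclusion becomes a fact
                changed = True
    return facts

# Illustrative rules only -- invented fact names, not the thesis's rules.
RULES = [
    ({"current_low", "electrode_position_high"}, "suspect_metal_accumulation"),
    ({"suspect_metal_accumulation", "energy_per_ton_high"}, "recommend_add_carbon"),
]
```

Real-time electrical parameters would be translated into facts such as these before each inference cycle.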

The research on algorithm of image mosaic

Wang, Xin January 2008 (has links) (PDF)
Image-based rendering (IBR) has been one of the most important and rapidly developing techniques in computer graphics and virtual reality in recent years. Image mosaicing, one of the hot topics of IBR, has also become a research interest of many researchers in image processing and computer vision. Its applications cover virtual scene construction, remote sensing, medical imaging, military affairs, etc. However, some difficult issues need further study, including new optimization methods for image registration and new acceleration methods for image stitching; these are the main topics of this thesis. First, since the precision and degree of automation of an image mosaic depend on the image registration algorithm, a new image stitching optimization method based on maximum mutual information is presented. The main idea is to combine the PSO (particle swarm optimization) algorithm with a wavelet multiresolution strategy, adapting the PSO parameters to the resolution of the images. The experiments show that this method prevents the registration process from getting stuck in local extrema during image interpolation, finds the optimum within a limited number of iterations, and obtains subpixel registration accuracy during image stitching. Secondly, to address blurred stitching when the geometric deformation and the changes of scale between images are severe, a new method based on robust feature matching is proposed. It first searches for the overlap area between two images using phase correlation, and then detects Harris corner points in the overlap areas of images reduced to different scales with a multiresolution pyramid. This compensates for the Harris operator's lack of robustness to changes of scale.
To improve the running performance of image feature matching, a dimension reduction method for high-dimensional feature vectors based on PCA is proposed. Finally, the global parameters are optimized by LMedS (least median of squares) to realize the image mosaic. The experiments show that the methods proposed in this thesis reduce the computational cost while guaranteeing the quality of the image mosaic.
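The overlap-search step via phase correlation can be sketched for the simplest case, a pure integer translation between two equally sized images (the thesis's full method goes further, handling scale and deformation with the pyramid and feature matching):

```python
import numpy as np

def phase_correlation_shift(ref, moved):
    """Estimate the integer translation between two equally sized
    images via phase correlation: the normalized cross-power spectrum
    of the pair has an inverse FFT that peaks at the shift."""
    F1 = np.fft.fft2(ref)
    F2 = np.fft.fft2(moved)
    cross = F2 * np.conj(F1)
    cross /= np.abs(cross) + 1e-12            # keep only the phase
    corr = np.abs(np.fft.ifft2(cross))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    return peak                               # (row shift, col shift), modulo image size
```

Subpixel accuracy, as reported in the thesis, would require interpolating around the correlation peak rather than taking its integer position.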

Nonstandard reamer CAD/CAE/CAM integrated system

Zhu, Shijun January 2006 (has links) (PDF)
The Chinese economy has grown quickly and continuously in recent years, and manufacturing is an important part of Chinese industry, so the enhancement of modern manufacturing technology is becoming an important issue. The aim of the present thesis is to develop an integrated CAD/CAE/CAPP/CAM system for nonstandard (over-stiff) tools. This involves integrating modern information technology, manufacturing technology and management technology. The objective is to propose a totally new and integrated design, analysis and manufacturing system, which would give manufacturers the capability to incorporate CAD/CAE/CAM systems. Many enterprises and universities in China have developed CAD systems for standard tools in recent years, but for nonstandard complex tools much remains to be done. After more than one year of investigation and research, a great quantity of information and data was assembled, setting a solid foundation for completing this project successfully. The thesis considers in detail the problems associated with nonstandard over-stiff end milling cutters to illustrate the design approach used and its implementation. It uses feature-based model construction technology and applies CAD and finite element analysis, together with the relevant theories and technologies for processing and manufacturing, to complete the development of an integrated CAD/CAE/CAM system for nonstandard complex tools. The thesis consists of six chapters, dealing with the project's background, the system's design and structural design requirements, finite element analysis, CAM, and concluding remarks based on what was learned during the work. The project's kernel technology is the use of the finite element method to analyze the models of nonstandard complex tools with the large-scale finite element analysis software ANSYS.

The implementation of a CRP (capacity requirements planning) module

Bai, Hai January 2009 (has links) (PDF)
ERP (Enterprise Resource Planning) originated as a management concept, based on the supply chain, for balancing production planning within an enterprise. This thesis focuses on the CRP module of ERP and describes in detail the concept, content and modularization of CRP, as well as its usefulness and application in the production process. The function of the CRP module is to explode orders into production operations in order to calculate production times, and then to adjust the load, or accumulated load, of each work center or machine accordingly. The implementation described here first loads the calculated production times onto the work centers without capacity limits (infinite loading), then uses automatic balancing or manual adjustment to determine the feasibility of the production order plan.
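The explode-then-load step can be sketched as follows, under infinite loading. The item names, routings and capacity figures are invented for illustration:

```python
from collections import defaultdict

def crp_load(orders, routings, capacity):
    """Explode production orders into work-center load per period
    (infinite loading), then flag the periods that exceed capacity.
    orders   : list of (item, quantity, period)
    routings : item -> list of (work_center, hours_per_unit)
    capacity : work_center -> available hours per period
    """
    load = defaultdict(float)                     # (center, period) -> loaded hours
    for item, qty, period in orders:
        for center, hours_per_unit in routings[item]:
            load[(center, period)] += qty * hours_per_unit
    overloads = {key: hours - capacity[key[0]]    # hours over capacity
                 for key, hours in load.items()
                 if hours > capacity[key[0]]}
    return dict(load), overloads
```

The balancing step would then move overloaded hours to earlier or later periods, or flag the order plan as infeasible.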

Research on distributed data mining system and algorithm based on multi-agent

Jiang, Lingxia January 2009 (has links) (PDF)
Data mining means extracting hidden, previously unknown knowledge and rules of potential value for decision making from the mass of data in databases. Association rule mining, which is widely used in practice, is one of the main research areas of data mining. With the development of network technology and the growing level of IT adoption, distributed databases have become common. Distributed data mining extracts global knowledge, useful for management and decision making, from geographically distributed databases, and has become an important issue in data mining. It can carry out a mining task using computers at different sites on a network, which not only improves mining efficiency and reduces the amount of data transmitted over the network, but is also good for the security and privacy of the data. Building on the relevant theories and the current state of research in data mining and distributed data mining, this thesis focuses on the structure of a distributed mining system and on a distributed association rule mining algorithm. The thesis first proposes a multi-agent-based architecture for a distributed data mining system. It adopts a star network topology and uses multiple agents to mine massive data stored in a distributed fashion. On top of this system, the thesis introduces a new distributed association rule mining algorithm, the RK-tree algorithm, based on the idea of two-phase knowledge combination. Each sub-site first mines the local frequent itemsets from its local database, then sends them to the main site. The main site combines the local frequent itemsets into a set of global candidate frequent itemsets and sends these candidates back to each sub-site. Each sub-site counts the support of the global candidate frequent itemsets and returns the counts to the main site.
Finally, the main site combines the results sent by the sub-sites and obtains the global frequent itemsets and the global association rules. The algorithm needs only three rounds of communication between the main site and the sub-sites, which greatly reduces the number and volume of messages and improves mining efficiency. Moreover, each sub-site can use any good existing centralized association rule mining algorithm for its local mining, which yields better local mining efficiency and reduces the workload. The algorithm is simple and easy to implement. The last part of the thesis presents the conclusions of this analysis, as well as directions for further research.
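The three-round exchange can be sketched in a single process, with plain support counting standing in for the RK-tree local miner (data and thresholds are illustrative):

```python
from itertools import combinations

def local_frequent(transactions, minsup):
    """Frequent itemsets (sizes 1 and 2) of one site, by plain counting --
    a stand-in for whatever centralized miner each sub-site actually uses."""
    counts = {}
    for t in transactions:
        for size in (1, 2):
            for items in combinations(sorted(t), size):
                counts[items] = counts.get(items, 0) + 1
    return {i for i, c in counts.items() if c >= minsup}

def support(transactions, itemset):
    return sum(1 for t in transactions if set(itemset) <= set(t))

def distributed_mine(sites, local_minsup, global_minsup):
    # Round 1: each sub-site sends its local frequent itemsets to the main site.
    candidates = set().union(*(local_frequent(s, local_minsup) for s in sites))
    # Round 2: the main site broadcasts the global candidates;
    # each sub-site returns the candidates' local support counts.
    site_counts = [{c: support(s, c) for c in candidates} for s in sites]
    # Round 3: the main site sums the supports and keeps the global frequents.
    return {c for c in candidates
            if sum(sc[c] for sc in site_counts) >= global_minsup}
```

Only itemset identities and counts cross the network, which is where the reduction in communication volume comes from.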

Research on detecting mechanism for Trojan horse based on PE file

Pan, Ming January 2009 (has links) (PDF)
As malicious programs, Trojan horses have become a huge threat to computer network security. Trojan horses can easily cause loss, damage or even theft of data because they are usually disguised as something useful or desirable and are often activated by mistake by computer users, corporations and other organizations. It is therefore important to adopt an effective and efficient method for detecting Trojan horses, and the exploration of new detection methods is of great significance. Scientists and experts have tried many approaches to detecting Trojan horses since the harm done by these programs was recognized. Up to now, these methods have fallen mainly into two categories [2]. The first is to detect Trojan horses by checking computer ports, since Trojan horses send out messages through them [2]; however, such methods can only detect Trojan horses that are active at the moment of detection. The second is to detect Trojan horses by examining file signatures [2] [19], in the same way that computer viruses are handled. As new Trojan horses may contain unknown signatures, methods in this category may not be effective enough when new and unknown Trojan horses appear continuously, sending out unknown signatures that escape detection. For these reasons, the existing methods are limited when dormant or unknown Trojan horses must be detected. This thesis proposes a new method that can detect dormant and unknown Trojan horses: detection based on a file's static characteristics. The thesis takes the PE file format as the object of research, because approximately 75% of personal computers worldwide run Microsoft Windows [4], and Trojan horses usually exist as Portable Executable (PE) files on the Windows platform. Based on the PE file format, the research extracts from each part of a PE file all the static information that characterizes the file.
This static information is then analyzed with intelligent information processing techniques, and a detection model is established to estimate whether a PE file is a Trojan horse. The model can detect unknown Trojan horses by analyzing the static characteristics of a file; the information used to verify the model is new and unknown to it, in other words, it is not used during the training of the model. The thesis is organized as follows. First, it discusses the limitations of traditional detection techniques, related research, and the new method of detecting Trojan horses based on a file's static information. Second, it focuses on the detection model itself, covering the extraction of static information from PE files, the choice of intelligent information processing techniques, and the construction of the detection model. Lastly, it discusses directions for future research in this field.
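A sketch of the kind of static header information such a model can draw on, parsed directly from the DOS and COFF headers of a PE file. This is only a small, standard subset; the thesis's feature set is not specified here:

```python
import struct

def pe_static_features(data: bytes):
    """Extract a few static features from a PE file's headers:
    machine type, number of sections, timestamp and characteristics."""
    if data[:2] != b"MZ":
        raise ValueError("not an MZ executable")
    # Offset of the PE header is stored at 0x3C in the DOS header.
    (e_lfanew,) = struct.unpack_from("<I", data, 0x3C)
    if data[e_lfanew:e_lfanew + 4] != b"PE\x00\x00":
        raise ValueError("missing PE signature")
    # COFF header: Machine (2), NumberOfSections (2), TimeDateStamp (4), ...
    machine, n_sections, timestamp = struct.unpack_from("<HHI", data, e_lfanew + 4)
    # Characteristics sits 22 bytes after the PE signature.
    (characteristics,) = struct.unpack_from("<H", data, e_lfanew + 22)
    return {"machine": machine, "sections": n_sections,
            "timestamp": timestamp, "characteristics": characteristics}
```

Feature vectors built from fields like these (plus section names, sizes and entropies) are what the intelligent processing stage would classify.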

Analysis of credit card data based on data mining technique

Zheng, Ying January 2009 (has links) (PDF)
In recent years, large amounts of data have accumulated with the spread of database systems. Meanwhile, application requirements are no longer confined to simple operations such as search and retrieval, because these operations cannot uncover the valuable information hidden in the databases; a great wealth of knowledge concealed in databases therefore remains largely undeveloped and unused. Data mining (DM) aims at finding essential, significant knowledge through the automatic processing of databases, and is one of the most challenging studies in the database and decision-making fields. The range of data processed is vast, from natural science, social science and business information to data produced by scientific processes and satellite observation, and the focus of DM has shifted from theory to practical application: wherever databases exist, there are DM projects to pursue. This thesis concentrates on mining valuable knowledge from credit card data using DM theories, techniques and methods. First, the basic theories and key algorithms of DM are introduced, with emphasis on decision tree algorithms, neural networks, the K-means clustering algorithm and the Apriori association rule algorithm, informed by the banking background and an analysis of the knowledge available in credit card data. A preliminary analysis of credit card information from the Industrial and Commercial Bank, Tianjin branch, was performed based on the conversion and integration of a data warehouse. Combined databases, including customer information and consumption properties, were established in accordance with data warehousing principles. The data were clustered with the K-means algorithm to find valuable knowledge and the frequent transaction intervals of the credit cards.
Back-propagation neural networks were designed to classify the credit card information, playing an important role in customer evaluation and prediction. In addition, the Apriori algorithm was applied to the same data to establish relations between customers' credit information and their consumption properties, and to find association rules among the credit items themselves, providing a solid foundation for further revision of the information evaluation. Our work shows that DM techniques are of great significance in analyzing credit card information, and it lays a firm foundation for further research on retrieving information from credit card data.
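The Apriori step used above can be sketched in its classic form, support counting with subset-based candidate pruning (the transactions below are illustrative, not credit card data):

```python
from itertools import combinations

def apriori(transactions, minsup):
    """Classic Apriori: grow frequent itemsets level by level, pruning
    candidates that have an infrequent subset."""
    transactions = [frozenset(t) for t in transactions]
    items = {i for t in transactions for i in t}

    def sup(itemset):
        return sum(1 for t in transactions if itemset <= t)

    frequent = {frozenset([i]) for i in items if sup(frozenset([i])) >= minsup}
    result = set(frequent)
    k = 2
    while frequent:
        # Join step: combine (k-1)-itemsets into k-itemset candidates.
        candidates = {a | b for a in frequent for b in frequent
                      if len(a | b) == k}
        # Prune step: every (k-1)-subset of a candidate must be frequent.
        candidates = {c for c in candidates
                      if all(frozenset(s) in frequent
                             for s in combinations(c, k - 1))}
        frequent = {c for c in candidates if sup(c) >= minsup}
        result |= frequent
        k += 1
    return result
```

Association rules are then read off each frequent itemset by splitting it into antecedent and consequent and checking confidence.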
