761 |
Analyse et optimisation d'efficacité de réseaux manufacturiers complexes. Jlili, Mohamed Malek, 19 April 2018
Les travaux de ce mémoire portent sur l'analyse et la conception optimale de systèmes manufacturiers composés de machines non fiables. Les systèmes considérés peuvent opérer selon une structure réseau d'assemblage/désassemblage. Des stocks tampons sont placés entre les machines pour les découpler les unes des autres. Ces machines peuvent opérer en mode de fonctionnement dégradé. Chaque machine est modélisée comme un système à trois états : fonctionnement nominal, panne totale et mode dégradé. On considère que le mode de fonctionnement dégradé affecte uniquement le taux de production nominal des machines et non la qualité des pièces produites. Afin d'évaluer le taux de production d'un tel réseau manufacturier à machines multi-états (dit complexe), une méthode d'évaluation analytique est tout d'abord explorée. Cette méthode consiste à remplacer chaque machine par une machine équivalente à deux états, puis à appliquer une des méthodes existantes pour les réseaux avec machines binaires. Après avoir découvert que cette méthode est imprécise même dans le cas simple de deux machines multi-états séparées par un stock, nous avons utilisé une simulation basée sur le logiciel Simio en vue d'une conception optimale du réseau. Dans cette conception, il est question de faire une sélection conjointe des technologies des machines et des tailles de stocks. L'objectif de l'optimisation est de maximiser le taux de production sous des contraintes de budget. La plupart des travaux existants considèrent le problème d'allocation des stocks tampons pour des lignes séries ou séries-parallèles, en supposant que les technologies des machines sont déjà choisies. L'extension ainsi développée est validée en utilisant différentes instances générées aléatoirement. Pour ce faire, le modèle de simulation développé est couplé à deux méthodes d'optimisation. La première méthode utilise l'outil d'optimisation OptQuest. La seconde méthode est une nouvelle heuristique basée sur un algorithme génétique (AG). Dans chacune des méthodes, l'outil d'optimisation se sert de l'estimation du taux de production effectuée par l'outil de simulation dans sa fonction objectif. Notre nouvelle méthode (simulation/AG) est comparée à une approche couplant une méthode analytique à un AG dans le cas de machines binaires. Les résultats numériques obtenus illustrent l'efficacité de notre méthode au niveau de la qualité des solutions, au détriment d'un temps de calcul plus élevé. / This thesis focuses on the analysis and optimal design of manufacturing systems composed of unreliable machines. The considered systems can operate in an assembly/disassembly network structure. Buffer stocks are placed between the machines in order to decouple them from each other. These machines can operate in degraded mode. Each machine is represented as a system with three states: nominal operation, total failure, and a degraded mode. We consider that the degraded mode affects only the nominal production rate of machines and not the quality of the parts produced. To assess the production rate of such a manufacturing system with multi-state machines (called complex), an analytical method is first explored. This method consists in replacing each machine by an equivalent two-state machine, and then applying one of the classical methods for networks with binary-state machines. After discovering that this method is imprecise even in the simple case of two multi-state machines separated by a buffer, we used a simulation method based on the software Simio for the optimal design of networks with multi-state machines.
In this design, the aim is to jointly select machine technologies and buffer sizes. The objective of the optimization is to maximize the production rate under budget constraints. Most existing work considers the buffer allocation problem for serial or series-parallel lines, assuming that the machine technologies are already chosen. Our method is developed and validated using different randomly generated instances. To do this, the developed simulation model is coupled with two optimization methods. The first method uses the OptQuest optimization tool. The second method is a new heuristic based on a genetic algorithm (GA). In each method, the optimizer uses the production rate estimate produced by the simulation tool in its objective function. Our new method (simulation/GA) is compared to an approach coupling an analytical method with a GA in the case of binary machines. The numerical results illustrate the effectiveness of our method in terms of solution quality, at the expense of longer computation times.
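As a rough illustration of the simulation/GA coupling described in this abstract, the Python sketch below evolves joint technology and buffer-size choices under a budget constraint. It is only a sketch: the technology table, the costs, and the throughput() function are invented placeholders standing in for the Simio simulation estimate, and the GA is a bare-bones version, not the thesis's heuristic.

```python
import random

# Hypothetical problem data: three stations, each with a choice of machine
# technology (cost, nominal rate) and inter-station buffers whose size has a cost.
TECHNOLOGIES = [(10, 1.0), (15, 1.3), (25, 1.6)]  # (cost, nominal production rate)
BUFFER_COST, MAX_BUFFER, BUDGET, N_STATIONS = 2, 10, 90, 3

def throughput(design):
    """Stand-in for the simulation estimate of the production rate."""
    techs, buffers = design
    rate = min(TECHNOLOGIES[t][1] for t in techs)    # bottleneck machine
    return rate * (1 - 1 / (2 + sum(buffers)))       # toy decoupling effect

def cost(design):
    techs, buffers = design
    return sum(TECHNOLOGIES[t][0] for t in techs) + BUFFER_COST * sum(buffers)

def fitness(design):
    # Budget constraint handled as a hard penalty.
    return throughput(design) if cost(design) <= BUDGET else 0.0

def random_design():
    return ([random.randrange(len(TECHNOLOGIES)) for _ in range(N_STATIONS)],
            [random.randrange(MAX_BUFFER + 1) for _ in range(N_STATIONS - 1)])

def crossover(a, b):
    cut = random.randrange(1, N_STATIONS)
    return (a[0][:cut] + b[0][cut:], a[1][:1] + b[1][1:])

def mutate(design):
    techs, buffers = design
    if random.random() < 0.3:
        techs[random.randrange(N_STATIONS)] = random.randrange(len(TECHNOLOGIES))
    if random.random() < 0.3:
        buffers[random.randrange(len(buffers))] = random.randrange(MAX_BUFFER + 1)
    return (techs, buffers)

population = [random_design() for _ in range(30)]
for _ in range(100):  # generations
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    population = parents + [
        mutate(crossover(random.choice(parents), random.choice(parents)))
        for _ in range(20)]
best = max(population, key=fitness)
print(best, fitness(best))
```

In the actual approach, every fitness evaluation would trigger a simulation run, which is why solution quality comes at the expense of computation time.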
|
762 |
Three Essays in Parallel Machine Scheduling. Garg, Amit, January 2008
No description available.
|
763 |
On analytical modeling and design of a novel transverse flux generator for offshore wind turbines. Svechkarenko, Dmitry, January 2007
The objective of this thesis is to develop a cost-effective direct-driven wind generator suited for offshore wind turbines. As the generator price is a complicated function of many parameters, the emphasis is mainly on reducing the weight of active materials, such as copper, laminated steel, permanent magnets, and electrical insulation. The higher specific torque and power density of a transverse flux permanent magnet (TFPM) machine in comparison to conventional radial-flux machines make it a promising solution for direct-driven wind turbine generators. The novel TFPM generator investigated in this work, due to its possibly more compact construction, would allow a better utilization of the available nacelle space. The analytical model, including evaluation of the synchronous inductance, is developed and applied in a parametric study of a 5 MW wind turbine generator. The influence of the design variables on the analyzed characteristics is investigated. A number of machines that have approximately the same performance are found. These machines are compared and optimal ranges for the main parameters are suggested. One possible design topology is presented in more detail, with dimensions and main characteristics. This generator is compared with radial-flux generators with surface-mounted and tangentially-polarized magnets. It is found that the analyzed TFPM generator would favor a smaller outer diameter, reduced total active weight, and reduced weight of the magnet material. The TFPM would, however, require a longer axial length. TFPM generators with a broader range of output power have also been investigated. Generators rated 3, 5, 7, 10, and 12 MW are analyzed and their characteristics with respect to the output power are compared. The novel transverse flux topology has been found to be promising for low-speed, high-torque applications, such as direct-driven wind turbines in the multi-megawatt range.
|
764 |
Possibilités de mécanisation agricole dans le delta du fleuve Sénégal. N'Dir, Massaër, 26 August 2020
Québec : Université Laval, Bibliothèque, 2020
|
765 |
Comparing the scaffolding provided by physical and virtual manipulatives for students' understanding of simple machines. Chini, Jacquelyn J., January 1900
Doctor of Philosophy / Department of Physics / Nobel S. Rebello / Conventional wisdom has long advised that students' learning is best supported by interaction with physical manipulatives. Thus, in the physics laboratory, students typically spend their time conducting experiments with physical equipment. However, computer simulations offer a tempting alternative to traditional physical experiments. In a virtual experiment, using a computer simulation, students can gather data quickly, and measurement errors and frictional effects can be explicitly controlled. This research investigates the relative support for students' learning offered by physical and virtual experimentation in the context of simple machines.
Specifically, I have investigated students' learning as supported by experimentation with physical and virtual manipulatives from three different angles: what do students learn, how do students learn, and what do students think about their learning.
The results indicate that the virtual manipulatives better supported students' understanding of work and potential energy than the physical manipulatives did. Specifically, in responding to data analysis questions, students who used the virtual manipulatives before the physical manipulatives were more likely to describe work as constant across different lengths of frictionless inclined planes (or pulley systems) and were more likely to adequately compare work and potential energy, whereas students who used the physical manipulatives first were more likely to talk about work and potential energy separately. On the other hand, no strong support was found to indicate that the physical manipulatives better supported students' understanding of a specific concept.
In addition, students' responses to the survey questions indicate that students tend to value data from a computer simulation more than data from a physical experiment. The interview analysis indicates that the virtual environment better supported students in creating new ideas than the physical environment did.
These results suggest that the traditional wisdom that students learn best from physical experiments is not necessarily true. Thus, researchers should continue to investigate how best to interweave students' experiences with physical and virtual manipulatives. In addition, it may be useful for curriculum designers and instructors to spend more of their effort designing learning experiences that make use of virtual manipulatives.
|
766 |
Expressing Interactivity with States and Constraints. Oney, Stephen William-Lucas, 01 April 2015
A Graphical User Interface (GUI) is defined by its appearance and its behavior. A GUI's behavior determines how it reacts to user and system events such as mouse, keyboard, or touchscreen presses, or changes to an underlying data model. Although many tools are effective in enabling designers to specify a GUI's appearance, defining a custom behavior is difficult and error-prone.
Many of the difficulties developers face in defining GUI behaviors are the result of their reactive nature. The order in which GUI code is executed depends upon the order in which it receives external inputs. Most widely used user interface programming frameworks use an event-callback model, where developers define GUI behavior by writing callbacks (sequences of low-level actions) to take in reaction to events. However, the event-callback model for user-interface development has several problems, many of which were identified long before I started work on this dissertation. First, it is disorganized: the location and order of event-callback code often has little correspondence with the order in which it will be executed. Second, it divides GUI code in a way that requires writing interdependent code to keep the interface in a consistent state. This is because maintaining a consistent state requires referencing and modifying the same state variables across multiple different callbacks, which are often distributed throughout the code.
In this dissertation, I introduce a new framework for defining GUI behavior, called the state-constraint framework. This framework combines constraints, which allow developers to define relationships among interface elements that are automatically maintained by the system, with state machines, which track the status of an interface. In the state-constraint framework, developers write GUI behavior by defining constraints that are enforced when the interface is in specific states. This framework allows developers to specify more nuanced constraints and allows the GUI's appearance and behavior to vary by state.
I created two tools using the state-constraint framework: a library for Web developers (ConstraintJS) and an interactive graphical language (InterState). ConstraintJS provides constraints that can be used both to control content and to control display, and integrates these constraints with the three Web languages: HTML, CSS, and JavaScript. ConstraintJS is designed to take advantage of the declarative syntaxes of HTML and CSS: it allows the majority of an interactive behavior to be expressed concisely in HTML and CSS, rather than requiring the programmer to write large amounts of JavaScript. InterState introduces a visual notation and live editor that clearly represent how states and constraints combine to define GUI behavior. An evaluation of InterState showed that its computational model, visual notation, and editor were effective in allowing developers to define GUI behavior compared to conventional event-callback code. InterState also introduces extensions to the state-constraint framework that allow developers to easily reuse behaviors, as well as primitives for authoring multi-touch gestures.
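The following Python sketch (not ConstraintJS's or InterState's actual API; all names are hypothetical) illustrates the core state-constraint idea: constraints are declarative relationships evaluated on demand, and only the constraints attached to the interface's current state are enforced.

```python
# A toy state-constraint engine: constraints are thunks re-evaluated on
# demand, and only the constraints attached to the current state are active.
class StateConstraintObject:
    def __init__(self, initial_state):
        self.state = initial_state
        self.constraints = {}          # state -> {property: thunk}

    def constrain(self, state, prop, thunk):
        self.constraints.setdefault(state, {})[prop] = thunk

    def get(self, prop):
        # Look up the constraint active in the current state and evaluate it.
        return self.constraints[self.state][prop]()

# A button whose width tracks its label in every state, but whose color
# depends on which state the interface is in.
label = "Submit"
button = StateConstraintObject("idle")
for state in ("idle", "pressed"):
    button.constrain(state, "width", lambda: 10 + 8 * len(label))
button.constrain("idle", "color", lambda: "gray")
button.constrain("pressed", "color", lambda: "blue")

print(button.get("color"))   # gray
button.state = "pressed"     # event: mouse down moves the state machine
print(button.get("color"))   # blue
print(button.get("width"))   # 58 in both states
```

Here the width constraint holds in every state while the color constraint varies by state, which is exactly the kind of state-dependent relationship the framework is designed to express without scattering state variables across callbacks.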
|
767 |
Classification of Hate Tweets and Their Reasons using SVM. Tarasova, Natalya, January 2016
Denna studie fokuserar på att klassificera hat-meddelanden riktade mot mobiloperatörerna Verizon, AT&T och Sprint. Huvudsyftet är att med hjälp av maskininlärningsalgoritmen Support Vector Machines (SVM) klassificera meddelanden i fyra kategorier - Hat, Orsak, Explicit och Övrigt - för att kunna identifiera ett hat-meddelande och dess orsak. Studien resulterade i två metoder: en "naiv" metod (the Naive Method, NM) och en mer "avancerad" metod (the Partial Timeline Method, PTM). NM är en binär metod i den bemärkelsen att den ställer frågan: "Tillhör denna tweet klassen Hat?". PTM ställer samma fråga men till en begränsad mängd av tweets, dvs bara de som ligger inom ± 30 min från publiceringen av hat-tweeten. Sammanfattningsvis indikerade studiens resultat att PTM är noggrannare än NM. Dock tar den inte hänsyn till samtliga tweets på användarens tidslinje. Därför medför valet av metod en avvägning: PTM erbjuder en noggrannare klassificering och NM erbjuder en mer utförlig klassificering. / This study focused on finding the hate tweets posted by the customers of three mobile operators (Verizon, AT&T and Sprint) and identifying the reasons for their dissatisfaction. The timelines containing a hate tweet were collected and studied for the presence of an explanation. A machine learning approach was employed using four categories: Hate, Reason, Explanatory and Other. The classification was conducted with a one-versus-all approach using the Support Vector Machines algorithm implemented in the LIBSVM tool. The study resulted in two methodologies: the Naive Method (NM) and the Partial Timeline Method (PTM). The Naive Method relied only on the feature space consisting of the most representative words chosen with the Akaike Information Criterion. PTM utilized the fact that the majority of the explanations were posted within a one-hour time window of the posting of a hate tweet. We found that the accuracy of PTM is higher than that of NM. In addition, PTM saves time and memory by analysing fewer tweets. At the same time, this implies a trade-off between relevance and completeness. / Opponent: Kristina Wettainen
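As an illustration of the one-versus-all classification step, the sketch below uses scikit-learn (whose SVC class wraps LIBSVM) rather than the LIBSVM tool itself; the example tweets are invented placeholders, TF-IDF stands in for the feature space, and the thesis's AIC-based word selection is not reproduced.

```python
# Minimal one-versus-all SVM text classification over the four categories.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC

tweets = [
    "I hate @operator, worst service ever",    # Hate
    "dropped calls all day again",             # Reason
    "the outage was caused by maintenance",    # Explanatory
    "just got a new phone case",               # Other
] * 5  # repeated so each class has a few training examples
labels = ["Hate", "Reason", "Explanatory", "Other"] * 5

vec = TfidfVectorizer()
X = vec.fit_transform(tweets)  # word features; AIC selection not shown

# One binary SVM per category, as in the one-versus-all approach.
clf = OneVsRestClassifier(SVC(kernel="linear")).fit(X, labels)
print(clf.predict(vec.transform(["I hate this operator, signal is terrible"])))
# -> ['Hate']
```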
|
768 |
Statistical models for natural scene data. Kivinen, Jyri Juhani, January 2014
This thesis considers statistical modelling of natural image data. Advances in this field can have significant impact both for engineering applications and for the understanding of the human visual system. Several recent advances in natural image modelling have been obtained with the use of unsupervised feature learning. We consider a class of such models, restricted Boltzmann machines (RBMs), used in many recent state-of-the-art image models. We develop extensions of these stochastic artificial neural networks, and use them as a basis for building more effective image models and tools for computational vision.
We first develop a novel framework for obtaining Boltzmann machines in which the hidden unit activations co-transform with transformed input stimuli in a stable and predictable way throughout the network. We define such models to be transformation equivariant. Such properties have been shown useful for computer vision systems, and were motivational, for example, in the development of steerable filters, a widely used classical feature extraction technique. Translation-equivariant feature sharing has been the standard method for scaling image models beyond patch-sized data to large images. In our framework we extend shallow and deep models to account for other kinds of transformations as well, focusing on in-plane rotations.
Motivated by the unsatisfactory results of current generative natural image models, we take a step back and evaluate whether they are able to model a subclass of the data, natural image textures. This is a necessary subcomponent of any credible model for visual scenes. We assess the performance of a state-of-the-art model of natural images for texture generation, using a dataset and evaluation techniques from prior work. We also perform a dissection of the model architecture, uncovering the properties important for good performance. Building on this, we develop structured extensions for more complicated data comprised of textures from multiple classes, using the single-texture model architecture as a basis. These models are shown to produce state-of-the-art texture synthesis results quantitatively, and are also effective qualitatively. It is demonstrated empirically that the developed multiple-texture framework provides a means to generate images of differently textured regions and more generic globally varying textures, and can also be used for texture interpolation, where the approach is radically different from the others in the area.
Finally, we consider visual boundary prediction from natural images. This work aims to improve understanding of Boltzmann machines in the generation of image segment boundaries, and to investigate deep neural network architectures for learning the boundary detection problem. The developed networks (which avoid several hand-crafted model and feature designs commonly used for the problem) produce the fastest reported inference times in the literature, combined with state-of-the-art performance.
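For readers unfamiliar with RBMs, the sketch below trains a plain binary RBM with one-step contrastive divergence (CD-1) in NumPy. It shows only the standard model, not the transformation-equivariant or multi-texture extensions developed in the thesis, and the toy data are invented.

```python
# A minimal binary restricted Boltzmann machine trained with CD-1.
import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden, lr = 16, 8, 0.1
W = 0.01 * rng.standard_normal((n_visible, n_hidden))
b_v, b_h = np.zeros(n_visible), np.zeros(n_hidden)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

data = (rng.random((100, n_visible)) < 0.3).astype(float)  # toy binary "images"

for epoch in range(50):
    for v0 in data:
        # Positive phase: sample hidden units given the data.
        ph0 = sigmoid(v0 @ W + b_h)
        h0 = (rng.random(n_hidden) < ph0).astype(float)
        # Negative phase: one Gibbs step back to a reconstruction.
        pv1 = sigmoid(h0 @ W.T + b_v)
        v1 = (rng.random(n_visible) < pv1).astype(float)
        ph1 = sigmoid(v1 @ W + b_h)
        # CD-1 update: difference between data and reconstruction statistics.
        W += lr * (np.outer(v0, ph0) - np.outer(v1, ph1))
        b_v += lr * (v0 - v1)
        b_h += lr * (ph0 - ph1)

print("reconstruction error:",
      np.mean((data - sigmoid(sigmoid(data @ W + b_h) @ W.T + b_v)) ** 2))
```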
|
769 |
Abstract interpretation and optimising transformations for applicative programs. Mycroft, Alan, January 1982
This thesis describes methods for transforming applicative programs with the aim of improving their efficiency. The general justification for these techniques is presented via the concept of abstract interpretation. The work can be seen as providing mechanisms to optimise applicative programs for sequential von Neumann machines.
The chapters address the following subjects. Chapter 1 gives an overview and gentle introduction to the following technical chapters. Chapter 2 gives an introduction to and motivation for the concept of abstract interpretation necessary for the detailed understanding of the rest of the work. It includes certain theoretical developments, of which I believe the most important is the incorporation of the concept of partial functions into our notion of abstract interpretation. This is done by associating non-standard denotations with functions, just as denotational semantics gives the standard denotations. Chapter 3 gives an example of the ease with which we can talk about function objects within abstract interpretive schemes. It uses this to show how a simple language using call-by-need semantics can be augmented with a system that annotates places in a program at which call-by-value can be used without violating the call-by-need semantics. Chapter 4 extends the work of chapter 3 by showing that, under some sequentiality restriction, the incorporation of call-by-value for call-by-need can be made complete, in the sense that the resulting program will possess only strict functions except for the conditional. Chapter 5 applies the concepts of abstract interpretation to a completely different problem: introducing destructive operators into an applicative program in order to increase implementation efficiency without violating the applicative semantics. Finally, chapter 6 contains a discussion of the implications of such techniques for real languages, and in particular presents arguments whereby applicative languages should be seen as whole systems and not merely the applicative subset of some larger language.
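As a hedged illustration of the chapter 3 idea, the Python sketch below performs a toy strictness analysis over a two-point abstract domain: 0 means definitely undefined and 1 means possibly defined, and an argument position where an abstract 0 forces the result to 0 can safely be passed by value. The expression representation is invented for the example, and the sketch ignores recursion (the fixpoint machinery the thesis treats).

```python
# Expressions: ('var', i), ('const',), ('prim', e1, e2) for a strict
# primitive like +, and ('cond', c, t, e) for the conditional.

def aeval(expr, env):
    """Abstract evaluation over {0 = undefined, 1 = possibly defined}."""
    kind = expr[0]
    if kind == 'var':
        return env[expr[1]]
    if kind == 'const':
        return 1
    if kind == 'prim':                       # strict in both arguments
        return min(aeval(expr[1], env), aeval(expr[2], env))
    if kind == 'cond':                       # strict only in the condition
        return min(aeval(expr[1], env),
                   max(aeval(expr[2], env), aeval(expr[3], env)))

def strict_args(body, n_args):
    """Argument positions safe for call-by-value."""
    return [i for i in range(n_args)
            if aeval(body, [0 if j == i else 1 for j in range(n_args)]) == 0]

# f(x, y, z) = if x then y + 1 else y + z   -- strict in x and y, not z
body = ('cond', ('var', 0),
        ('prim', ('var', 1), ('const',)),
        ('prim', ('var', 1), ('var', 2)))
print(strict_args(body, 3))   # [0, 1]
```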
|
770 |
A Dynamic Behavioral Biometric Approach to Authenticate Users Employing Their Fingers to Interact with Touchscreen Devices. Ponce, Arturo, 01 May 2015
The use of mobile devices has extended to all areas of human life and has changed the way people work and socialize. Mobile devices are susceptible to being lost, stolen, or compromised. Several approaches have been adopted to protect the information stored on these devices. One of these approaches is user authentication. The two most popular methods of user authentication are knowledge-based and token-based methods, but they present different kinds of problems.
Biometric authentication methods have emerged in recent years as a way to deal with these problems. They use an individual's unique characteristics for identification and have proven to be somewhat effective in authenticating users. Biometric authentication methods also present several problems. For example, they aren't 100% effective in identifying users, some of them are not well perceived by users, others require too much computational effort, and others require special equipment or special postures from the user. Ultimately, these shortcomings can result in unauthorized use of the devices or in the user being annoyed by the authentication scheme.
New ways of interacting with mobile devices have emerged in recent years. This makes it necessary for authentication methods to adapt to these changes and take advantage of them. For example, the use of touchscreens has become prevalent in mobile devices, which means that biometric authentication methods need to adapt to it. One important aspect to consider when adopting these new methods is their acceptance by users. The Technology Acceptance Model (TAM) states that system use is a response that can be predicted by user motivation.
This work presents an authentication method that can constantly verify the user's identity, which can help prevent unauthorized use of a device or access to sensitive information. The goal was to authenticate people while they used their fingers to interact with their touchscreen mobile devices doing ordinary tasks like vertical and horizontal scrolling. The approach used six biometric traits for authentication. The combination of those traits allowed for authentication at the beginning and at the end of a finger stroke. Support Vector Machines were employed, and the best results show Equal Error Rate (EER) values around 35%. Those results demonstrate the potential of the approach to verify a person's identity.
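For context on how an Equal Error Rate like the 35% reported above is obtained, the sketch below computes an EER from verification scores; the genuine and impostor score distributions are invented placeholders, not the thesis's data.

```python
# EER: the operating point where false accept rate equals false reject rate.
import numpy as np

rng = np.random.default_rng(1)
genuine = rng.normal(0.6, 0.2, 500)    # scores for the true user's strokes
impostor = rng.normal(0.4, 0.2, 500)   # scores for other users' strokes

thresholds = np.sort(np.concatenate([genuine, impostor]))
far = np.array([(impostor >= t).mean() for t in thresholds])  # false accepts
frr = np.array([(genuine < t).mean() for t in thresholds])    # false rejects

# EER is read off where the two error curves cross.
i = int(np.argmin(np.abs(far - frr)))
print(f"EER = {(far[i] + frr[i]) / 2:.2%} at threshold {thresholds[i]:.2f}")
```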
Additionally, this work tested the acceptance of the approach among participants, which can influence its eventual adoption. An acceptance level of 80% was obtained, which compares favorably with other behavioral biometric approaches.
|