  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

An Automated Defect Detection Approach For Cosmic Functional Size Measurement Method

Yilmaz, Gokcen 01 September 2012 (has links) (PDF)
Software size measurement provides a basis for software project management and plays an important role in its activities, such as estimation, process benchmarking, and quality control. Because size can be measured with functional size measurement (FSM) methods in the early phases of a software project, functionality is one of the most frequently used metrics. On the other hand, FSM methods are criticized for being subjective. The main aim of this thesis is to increase the accuracy of measurements performed with the COSMIC FSM method by decreasing the number of defects they contain. For this purpose, an approach that allows defects in functional size measurements to be detected automatically is developed. First, error classifications are established. Then, the COSMIC FSM Defect Detection Approach (DDA) is proposed and, based on it, the COSMIC FSM Defect Detection Tool (DDT) is developed.
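The flavour of rule-based checking the abstract describes can be sketched as a small validator over COSMIC data movements. The data model (a functional process as a list of movements) and the two rules shown are illustrative assumptions for this listing, not the DDA actually developed in the thesis.

```python
# Illustrative sketch: flag COSMIC measurements that break two basic rules
# (a functional process needs a triggering Entry, and an Exit or Write to
# have any observable effect). Data model and rules are assumptions, not
# the thesis's actual DDA implementation.

ENTRY, EXIT, READ, WRITE = "E", "X", "R", "W"

def detect_defects(process_name, movements):
    """Return a list of human-readable defect descriptions."""
    defects = []
    if movements.count(ENTRY) == 0:
        defects.append(f"{process_name}: no triggering Entry")
    if EXIT not in movements and WRITE not in movements:
        defects.append(f"{process_name}: no Exit or Write (process has no effect)")
    return defects

# A process that only reads data is flagged; one that writes is clean.
print(detect_defects("ListInvoices", [ENTRY, READ]))
print(detect_defects("SaveInvoice", [ENTRY, WRITE]))
```

A real checker would, as the thesis does, start from an explicit error classification and cover the full COSMIC rule set rather than two examples.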
12

Distributed Traffic Load Scheduler based on TITANSim for System Test of a Home Subscriber Server (HSS)

Kalaichelvan, Niranjanan January 2011 (has links)
The system test is very significant in the development life cycle of a telecommunication network node. Tools such as TITANSim are used to develop the test framework upon which a load test application is created. These tools need to be highly efficient and optimized to reduce the cost of the system test. This thesis project created a load test application based on the distributed scheduling architecture of TITANSim, whereby multiple users can be simulated by a single test component. This new distributed scheduling system greatly reduces the number of operating system processes involved, and thus the memory consumption of the load test application, so higher loads can be simulated with limited hardware resources. The load test application currently used for the system test of the HSS is based on the central scheduling architecture of TITANSim. The central scheduling architecture is a function test concept in which every user is simulated by a single test component. In the system test, several thousand users are simulated by the test system, so a load application based on the central scheduling architecture uses thousands of test components, leading to high memory consumption in the test system. In this architecture the scheduling of test components is centralized, which results in considerable communication overhead within the test system, as thousands of test components communicate with a master scheduling component during test execution. In the distributed scheduling architecture, by contrast, the scheduling task is performed locally by each test component, so there is no such communication overhead and the test system is highly efficient. In the distributed scheduling architecture, the traffic flow of the simulated users is described using finite state machines (FSMs). The FSMs are specified in configuration files that are read by the test system at run time.
Implementing traffic cases using the distributed scheduling architecture therefore becomes simpler and faster, as there is no (TTCN-3) coding/compilation. The HSS is the only node (within Ericsson) whose system test is performed using the central scheduling architecture of TITANSim; the other users (nodes) of TITANSim use the distributed scheduling architecture for its apparent benefits. Under these circumstances, this thesis project assumes significance for the HSS: when a decision to adopt the distributed scheduling architecture is made for the system test of the HSS, the load application created in this thesis project can be used as a model, or extended, for migrating the HSS test modules from the central to the distributed scheduling architecture. By creating this load application we have gained significant knowledge of the TITANSim framework, most importantly of the modifications to the framework required to create a distributed-scheduling-based load application for the HSS. The load application created for this project was used to system-test the HSS by generating load on real system test hardware. The results were analytically compared with test results from the existing load application (which is based on the central scheduling architecture). The analysis showed that the load application based on the distributed scheduling architecture is efficient, uses fewer test system resources, and is capable of scaling up the load generation capacity.
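The core idea of the distributed architecture — a traffic case described as an FSM transition table loaded from configuration and driven locally by each test component — can be sketched in a few lines. The table format, state names, and event names below are invented for illustration; TITANSim's actual FSM configuration syntax differs.

```python
# Minimal sketch of a configuration-driven FSM for one simulated user.
# The transition table stands in for TITANSim's run-time FSM configuration;
# states and events are invented for illustration.

TRANSITIONS = {
    ("idle",        "start_call"): "registering",
    ("registering", "auth_ok"):    "active",
    ("active",      "hang_up"):    "idle",
}

def run(events, state="idle"):
    """Drive one simulated user through the FSM; unknown events are ignored."""
    for event in events:
        state = TRANSITIONS.get((state, event), state)
    return state

print(run(["start_call", "auth_ok"]))  # user ends up in the "active" state
```

Because the behaviour lives in data rather than compiled code, changing a traffic case means editing the table, which mirrors the abstract's point that no TTCN-3 coding/compilation is needed.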
13

Godshantering av flygunderhållsartiklar - En värdeflödesanalys ur ett förbättringsperspektiv / Aircraft MRO logistics - A value stream analysis of continuous improvement

Stjernberg, Niclas January 2013 (has links)
Saab Component Maintenance in Linköping, Sweden, offers maintenance, repair and overhaul (MRO) services for civilian and military aircraft components. Lately, the department has struggled with slow throughput rates, despite various counteracting attempts. Studies show that large parts of the delays derive from the logistics department where goods arrive and are dispatched. Therefore, Saab wants to carry out an extensive analysis of the department in order to further investigate what is causing the slow throughput rates. The thesis begins with an extensive mapping of Component Maintenance's whole value stream to find out how departments interact with each other and which role individual departments play in the total supply chain. Out of 13,000 different part numbers, components were divided into five product families. From statistical history of package frequency, two value streams were chosen as research objects, where improvements show great potential for positive effects. From activity studies, observations, workshops and interviews, various elements were identified as obstructing the logistics department's material and information flow. Activity studies show that 55.8% of the activities performed in the arriving goods department were considered value-adding time, and that 67.9% of the activities performed in the dispatch goods department were considered value-adding time (where over half of the value-adding time was spent in administrative systems). Eleven critical challenges and five associated root causes were identified in the arriving goods department, and 14 critical challenges and five associated root causes in the dispatch goods department. To reduce and prevent further waste, the thesis recommends 14 critical actions to reduce the number of elements obstructing the flow of material and information at Component Maintenance.
In addition, the recommended changes are illustrated in a Future State Map for each department at the process level. By performing tasks in new sequences where material and information flows progress at a parallel, synchronized rate, the benefits of a balanced lean flow could be demonstrated in the dispatch department. The shortest lead time, from finished component to packaged and booked shipment, was noted for small units and took 20 minutes.
14

Uma estratégia para a minimização de máquinas de estados finitos parciais / An approach to incompletely specified finite state machine minimization

Alberto, Alex Donizeti Betez 22 April 2009 (has links)
Finite State Machines, besides their many applications, are widely used in Software Engineering to model system specifications. In these models, designers may inadvertently include redundant states, i.e., states which exhibit the same input/output behavior. Eliminating such states benefits the activities that use the model, reducing complexity and requiring fewer physical resources for implementation. The process of eliminating redundant states is known as minimization, and it can be accomplished in polynomial time for completely specified machines.
The minimization of partially specified machines, i.e., machines whose specification does not cover the whole input domain, can on the other hand be done in polynomial time only with non-deterministic approaches; it is a known NP-Complete problem. This work presents a deterministic strategy to minimize incompletely specified Finite State Machines, using heuristics and optimizations to accomplish the task more efficiently. In order to measure the performance improvement, experiments were run in which the execution time of an implementation of the proposed method was measured along with the times of implementations of two other known methods. The results revealed a significant performance advantage of the proposed approach over the previous methods.
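The easy case the abstract contrasts against — minimizing a *completely specified* machine in polynomial time — is classical partition refinement, sketched below. This illustrates the baseline, not the thesis's heuristic method for partial machines; the example machine is invented.

```python
# Sketch of classical partition refinement for a completely specified
# Mealy machine: delta maps (state, input) -> next state, lam maps
# (state, input) -> output. States with identical behaviour end up in
# the same block. Example machine is invented for illustration.

def minimize(states, inputs, delta, lam):
    def block_of(partition, s):
        return next(i for i, b in enumerate(partition) if s in b)

    # Initial partition: group states with identical output signatures.
    groups = {}
    for s in states:
        groups.setdefault(tuple(lam[(s, a)] for a in inputs), set()).add(s)
    partition = list(groups.values())

    # Refine: split a block when members jump to different blocks.
    changed = True
    while changed:
        changed = False
        new_partition = []
        for block in partition:
            split = {}
            for s in block:
                sig = tuple(block_of(partition, delta[(s, a)]) for a in inputs)
                split.setdefault(sig, set()).add(s)
            new_partition.extend(split.values())
            if len(split) > 1:
                changed = True
        partition = new_partition
    return partition

# s1 and s2 behave identically, so they merge into one block:
states, inputs = ["s0", "s1", "s2"], ["a"]
delta = {("s0", "a"): "s1", ("s1", "a"): "s0", ("s2", "a"): "s0"}
lam   = {("s0", "a"): 0,    ("s1", "a"): 1,    ("s2", "a"): 1}
print(sorted(sorted(b) for b in minimize(states, inputs, delta, lam)))
```

For partial machines this strategy breaks down because "compatible" states (agreeing wherever both are defined) need not form disjoint classes, which is what makes that problem NP-Complete.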
15

En optimierande kompilator för SMV till CLP(B) / An optimising SMV to CLP(B) compiler

Asplund, Mikael January 2005 (has links)
This thesis describes an optimising compiler for translating from SMV to CLP(B). The optimisation is aimed at reducing the number of required variables in order to decrease the size of the resulting BDDs; a partitioning of the transition relation is also performed. The compiler uses an internal representation of an FSM that is built up from the SMV description. A number of rewrite steps are performed on the problem description, such as encoding to a Boolean domain and performing the optimisations. The variable reduction heuristic is based on finding sub-circuits that are suitable for reduction, and a state space search is performed on those groups. An evaluation of the results shows that in some cases the compiler is able to greatly reduce the size of the resulting BDDs.
16

Design and Implementation of Single Issue DSP Processor Core

Ravinath, Vinodh January 2007 (has links)
Microprocessors built specifically for digital signal processing are DSP processors. DSP is one of the core technologies in rapidly growing applications such as communications and audio processing. The estimated growth of DSP processors in the last six years is over 40%, and the variety of DSP-capable processors for various applications has also increased with their rising popularity. The design flow and architecture of such processors are not commonly available to students for learning. This report is a structured approach to the design and implementation of an embedded DSP processor core for voice, audio and video codecs. The report focuses on the design requirement specification, the instruction set and assembly manual release, the micro-architecture design, and the implementation of the core. Details about the core verification are also included. The instruction set of this processor supports running basic kernels of BDTI benchmarking.
17

A Novel Financial Service Model in Private Cloud

Saha, Ranjan 14 January 2014 (has links)
In this thesis, we propose an architecture for a SaaS model in the Cloud that provides a service to financial investors who are not familiar with the various mathematical models used to evaluate financial instruments, for example, to price a derivative that is currently being traded before entering into a contract. An investor may approach the Cloud Service Provider (CSP) to price a particular derivative and specify time, budget and accuracy constraints. Based on these constraints, the service provider computes the option value using our proposed Financial Service Model (FSM). To evaluate the proposed model, we compared its pricing results with a classical model that provides a closed-form solution for option pricing, in order to meet the accuracy constraints. After establishing the accuracy of our pricing results, we further ensured that the SLA between the Financial Service Provider (FSP) and the investors is honoured by meeting the constraints put forth by the investor who uses the Cloud service.
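The abstract does not name its classical closed-form benchmark; the standard example of such a model is Black-Scholes, sketched below for a European call. Treating Black-Scholes as the benchmark is an assumption of this sketch, not a claim from the thesis.

```python
# Hedged sketch: Black-Scholes closed-form price of a European call, the
# kind of classical benchmark an accuracy comparison could validate
# against. (The thesis does not name its model; Black-Scholes is an
# assumption here.)
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """S: spot, K: strike, T: years to expiry, r: risk-free rate, sigma: volatility."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

print(round(bs_call(100, 100, 1.0, 0.05, 0.2), 2))  # ≈ 10.45
```

A numerical pricing service can check each computed price against this closed form to verify it stays within the investor's stated accuracy tolerance.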
19

Conception par patrons des modèles à événements discrets : de la machine à états finis au DEVS / Design pattern of discrete event system : from FSM to DEVS

Messouci, Rabah 12 May 2017 (has links)
Discrete event models (state machines or Discrete Event System Specification, DEVS) are often built in order to be simulated and therefore executed on a computer. Some simulation developers opt for imperative programming to implement the behaviors described by their state machines; others opt for object-oriented programming, a paradigm based on the notion of objects that offers a different way of seeing a program and its architecture.
The solutions proposed in the literature lack clarity and are extremely expensive in terms of debugging, reusing and changing the implemented model. The exclusive use of conditional statements (if-else or switch-case) makes any correction at the code level difficult, even impossible in some cases. These solutions also suffer in terms of reusability of parts of the code: such statements produce compact code with strong coupling between the variables and functions of the implemented model, so the designer can reuse the code only as a single block, and extracting a piece of code that corresponds to a piece of behavior is impossible. For all these reasons, we propose a new design of discrete event models, from the state machine to DEVS, in order to improve the quality of the produced code. The solution is based on the object paradigm so as to fully exploit its advantages while circumventing its limits. To this end, the solution proposed and detailed in this thesis is a new State-Event design pattern and its variants. The designer of simulation models will thus have a library of patterns to choose from in order to satisfy his design requirements.
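The reification the abstract argues for can be sketched with a tiny State pattern: each state is an object that handles events and returns the next state, replacing if-else chains. Class and event names are invented for illustration and are not taken from the thesis's pattern catalogue.

```python
# Minimal State pattern sketch: states are objects that handle events,
# replacing if/else dispatch. States and events are invented examples.

class Idle:
    def handle(self, event):
        return Running() if event == "start" else self

class Running:
    def handle(self, event):
        return Idle() if event == "stop" else self

def drive(events, state=None):
    """Feed events to the machine and report the final state's name."""
    state = state or Idle()
    for e in events:
        state = state.handle(e)
    return type(state).__name__

print(drive(["start", "stop"]))  # back to "Idle"
```

Because each state is its own class, a single behaviour can be modified, tested, or reused in isolation, which is exactly the dislocation of code the abstract says if-else implementations prevent.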
