  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Traffic and performance evaluation for optical networks. An Investigation into Modelling and Characterisation of Traffic Flows and Performance Analysis and Engineering for Optical Network Architectures.

Mouchos, Charalampos January 2009 (has links)
The convergence of multiservice heterogeneous networks, ever increasing Internet applications like peer-to-peer networking, and the growing number of users and services demand more efficient bandwidth allocation in optical networks. In this context, new architectures and protocols are needed in conjunction with cost-effective quantitative methodologies in order to provide an insight into the performance aspects of the next and future generation Internets. This thesis reports an investigation, based on efficient simulation methodologies, in order to assess existing high performance algorithms and to propose new ones. The analysis of the traffic characteristics of an OC-192 link (9953.28 Mbps) is initially conducted, a requirement due to the discovery of self-similar long-range dependent properties in network traffic, and the suitability of the GE distribution for modelling interarrival times of bursty traffic in short time scales is presented. Consequently, using a heuristic approach, the self-similar properties of the GE/G/∞ queue are presented, providing a method to generate self-similar traffic that takes into consideration burstiness in small time scales. A description of the state of the art in optical networking is then given, providing a deeper insight into the current technologies, protocols and architectures in the field and creating the motivation for more research into the promising switching technique of 'Optical Burst Switching' (OBS). An investigation into the performance impact of various burst assembly strategies on an OBS edge node's mean buffer length is conducted. Realistic traffic characteristics are considered based on the analysis of the OC-192 backbone traffic traces. In addition, the effect of burstiness in the small time scales on mean assembly time and burst size distribution is investigated.
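The GE (Generalized Exponential) distribution mentioned above can be sampled as a simple two-point mixture: with some probability the gap to the next arrival is exponential, otherwise it is zero, so arrivals cluster into batches. A minimal sketch under that standard parameterisation (the function and parameter names are ours, not the thesis's):

```python
import random

def ge_interarrival(mean_rate, scv, rng=random):
    """Sample one interarrival time from a GE distribution.

    mean_rate: mean arrival rate (1 / mean interarrival time)
    scv: squared coefficient of variation C^2 >= 1; scv == 1
         reduces the GE to a plain exponential distribution.
    """
    tau = 2.0 / (scv + 1.0)          # probability the gap is exponential
    if rng.random() < tau:
        # exponential branch; rate tau * mean_rate keeps the mean at 1/mean_rate
        return rng.expovariate(tau * mean_rate)
    return 0.0                        # zero gap: this arrival belongs to a batch

# rough sanity check: the sample mean should approach 1/mean_rate
rng = random.Random(42)
samples = [ge_interarrival(2.0, 9.0, rng) for _ in range(200_000)]
mean = sum(samples) / len(samples)   # expected around 0.5
```

With `scv > 1` the zero-length gaps reproduce the burstiness in small time scales that the abstract refers to, while the long-run arrival rate stays at `mean_rate`.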
A new Dynamic OBS Offset Allocation Protocol is devised and favourable comparisons are carried out between the proposed OBS protocol and the Just Enough Time (JET) protocol, in terms of mean queue length, blocking and throughput. Finally, the research focuses on the simulation methodologies employed throughout the thesis using the Graphics Processing Unit (GPU) of a commercial NVidia GeForce 8800 GTX, originally designed for gaming computers. Parallel generators of optical bursts are implemented and simulated in 'Compute Unified Device Architecture' (CUDA) and compared with simulations run on a general-purpose CPU, proving the GPU to be a cost-effective platform that can significantly speed up calculations and make simulations of more complex and demanding networks easier to develop.
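For context on the JET baseline mentioned above: in JET the control header travels ahead of the data burst by an offset that covers header processing at every remaining hop, optionally enlarged to give a traffic class higher priority. A toy calculation (the delay values are illustrative assumptions, not figures from the thesis):

```python
def jet_offset(hops_remaining, header_proc_delay, qos_extra=0.0):
    """Base JET offset: one header-processing delay per remaining hop,
    plus an optional extra offset used to prioritise a traffic class."""
    return hops_remaining * header_proc_delay + qos_extra

# e.g. 5 hops and 10 microseconds of header processing per hop
offset = jet_offset(5, 10e-6)   # 50 microseconds of lead time for the header
```

A dynamic offset-allocation scheme, as proposed in the thesis, would vary this value at runtime rather than fixing it per class.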
12

Predicted behaviour of the AGN 201 reactor at high power levels

Cooke, William B. H. January 1961 (has links) (PDF)
Thesis (M.S. in Mechanical Engineering)--Naval Postgraduate School, March 2010. / Thesis Advisor(s): Handle, Harry E. "January 1961." Description based on title screen as viewed on June 2, 2010. DTIC Descriptor(s): (Nuclear Reactors, Performance (Engineering)), Mathematical Analysis, Radioactive Isotopes, Heat Transfer, Kinetic Energy, Digital Computers, Nuclear Energy, Equations, Temperature, Neutron Flux, Nuclear Reactions. DTIC Identifier(s): AGN-201 Reactors. Includes bibliographical references (p. 62). Also available in print.
13

Avaliação de desempenho de processos de testes de software / Performance evaluation of software testing processes

Luiz Monteiro Marinho, Marcelo 31 January 2010 (has links)
Made available in DSpace on 2014-06-12T15:56:43Z (GMT). Previous issue date: 2010 / The demand for higher-quality software has motivated the definition of methods and techniques for developing software that meets the required quality standards. As a result, interest in software testing has grown in recent years. Software factories face difficulties in devising testing processes suited to each project: processes that are effective with respect to product quality while also executing efficiently. These competing concerns can compromise the intended quality levels or lead to convoluted, inefficient processes. Part of the problem is due both to the difficulty organizations face in defining a process for each particular project and to the absence of mechanisms for choosing the most suitable alternatives for each project in terms of performance and quality criteria. Environments that support process performance evaluation and resource-usage estimation therefore contribute to improving an organization's quality and productivity indices. Process execution models aimed at performance estimation, which take into account combinations of different scenarios and assets, can bring substantial productivity gains both in process customization and in the effectiveness of the process defined for the project. This work proposes a performance evaluation methodology applied to software testing processes.
By applying the proposed methodology it is possible to assess the impact of process changes, evaluate the performance of the testing process, run simulations to obtain more accurate estimates and, above all, help assure product quality. The methodology also enables the evaluation of alternative implementations and the identification of the best allocation of human resources to process activities. All of this can be done without actually executing the process, making the whole exercise faster and cheaper.
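A minimal illustration of the kind of simulation-based estimation described above is a Monte Carlo pass over assumed activity-duration distributions; the activities and triangular parameters below are invented for illustration and are not taken from the dissertation:

```python
import random

# illustrative test-process activities with (min, mode, max) durations in hours
ACTIVITIES = [
    ("test planning",  (4, 8, 16)),
    ("test design",    (8, 16, 32)),
    ("test execution", (16, 24, 60)),
    ("defect retest",  (2, 6, 20)),
]

def simulate_once(rng):
    """One sequential run of the process; returns total duration in hours."""
    return sum(rng.triangular(lo, hi, mode) for _, (lo, mode, hi) in ACTIVITIES)

rng = random.Random(7)
runs = sorted(simulate_once(rng) for _ in range(10_000))
p50 = runs[len(runs) // 2]        # median duration estimate
p90 = runs[int(len(runs) * 0.9)]  # 90th-percentile estimate for planning
```

Replacing the sequential sum with a scheduling model over shared resources would give the resource-usage estimates the abstract mentions, again without executing the real process.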
14

Extending Peass to Detect Performance Changes of Apache Tomcat

Rosenlund, Stefan 07 August 2023 (has links)
New application versions may contain source code changes that decrease the application's performance. To ensure sufficient performance, it is necessary to identify these code changes. Peass is a performance analysis tool that uses performance measurements of unit tests to achieve that goal for Java applications. However, it can only be utilized for Java applications that are built with Apache Maven or Gradle. This thesis provides a plugin for Peass that enables it to analyze applications built with Apache Ant. Peass utilizes the frameworks Kieker and KoPeMe to record the execution traces and measure the response times of unit tests. This results in the following tasks for the Peass-Ant plugin: (1) add Kieker and KoPeMe as dependencies and (2) execute transformed unit tests. For the first task, our plugin programmatically resolves the transitive dependencies of Kieker and KoPeMe and modifies the XML buildfiles of the application under test. For the second task, the plugin orchestrates the process that surrounds test execution, implementing performance optimizations for the analysis of applications with large codebases, and executes specific Ant commands that prepare and start test execution. To make our plugin work, we additionally improved Peass and Kieker: we implemented three enhancements and identified twelve bugs. We evaluated the Peass-Ant plugin by conducting a case study on 200 commits of the open-source project Apache Tomcat. We detected 14 commits with 57 unit tests that contain performance changes. Our subsequent root cause analysis identified nine source code changes, which we assigned to three clusters of source code changes known to cause performance changes.
Contents: 1. Introduction 1.1. Motivation 1.2. Objectives 1.3. Organization 2. Foundations 2.1. Performance Measurement in Java 2.2. Peass 2.3. Apache Ant 2.4. Apache Tomcat 3. Architecture of the Plugin 3.1. Requirements 3.2. Component Structure 3.3. Integrated Class Structure of Peass and the Plugin 3.4. Build Modification Tasks for Tomcat 4. Implementation 4.1. Changes in Peass 4.2. Changes in Kieker and Kieker-Source-Instrumentation 4.3. Buildfile Modification of the Plugin 4.4. Test Execution of the Plugin 5. Evaluative Case Study 5.1. Setup of the Case Study 5.2. Results of the Case Study 5.3. Performance Optimizations for Ant Applications 6. Related Work 6.1. Performance Analysis Tools 6.2. Test Selection and Test Prioritization Tools 6.3. Empirical Studies on Performance Bugs and Regressions 7. Conclusion and Future Work 7.1. Conclusion 7.2. Future Work
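The buildfile modification described above can be pictured as injecting extra classpath entries into an Ant build.xml. A simplified sketch using only the standard library; the element id, file names and jar paths are illustrative assumptions, not Peass-Ant's actual ones:

```python
import os
import tempfile
import xml.etree.ElementTree as ET

def add_classpath_entries(buildfile, jar_paths, path_id="test.classpath"):
    """Append <pathelement> children to the named <path> in an Ant buildfile,
    e.g. to expose measurement-framework jars to the unit-test targets."""
    tree = ET.parse(buildfile)
    for path in tree.getroot().iter("path"):
        if path.get("id") == path_id:
            for jar in jar_paths:
                ET.SubElement(path, "pathelement", {"location": jar})
            tree.write(buildfile)
            return True
    return False  # no matching <path> element found

# demo on a throwaway buildfile
workdir = tempfile.mkdtemp()
buildfile = os.path.join(workdir, "build.xml")
with open(buildfile, "w") as fh:
    fh.write('<project name="demo"><path id="test.classpath"/></project>')
added = add_classpath_entries(buildfile, ["lib/kieker.jar", "lib/kopeme.jar"])
```

The real plugin additionally resolves the transitive dependencies of Kieker and KoPeMe before writing them into the buildfile; here the jar list is supplied by the caller.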
15

An F/2 Focal Reducer For The 60-Inch U.S. Naval Observatory Telescope

Meinel, Aden B., Wilkerson, Gary W. 28 February 1968 (has links)
QC 351 A7 no. 07 / The Meinel Reducing Camera for the U. S. Naval Observatory's 60-inch telescope, Flagstaff, Arizona, comprises an f/10 collimator designed by Meinel and Wilkerson, and a Leica 50-mm f/2 Summicron camera lens. The collimator consists of a thick, 5-inch field lens located close to the focal plane of the telescope, plus four additional elements extending toward the camera. The collimator has an EFL (effective focal length) of 10 inches, yielding a 1-inch exit pupil that coincides with the camera's entrance pupil, 1.558 inches beyond the final surface of the collimator. There is room between the facing lenses of the collimator and camera to place filters and a grating. The collimated light here provides the best possible conditions for interference filters. Problems of the collimator design work included field curvature and astigmatism due to the stop's being so far outside the collimator. Two computer programs were used in development of the collimator design. Initial work, begun in 1964, was with the University of Rochester's ORDEALS program (this was the first time the authors had used such a program) and was continued through July, 1965. Development subsequently was continued and completed on the Los Alamos Scientific Laboratory's program, LASL. The final design, completed January 24, 1966, was evaluated with ORDEALS. This project gave a good opportunity to compare ORDEALS, an "aberration" program, with LASL, a "ray deviation" program. It was felt that LASL was the superior program in this case, and some experimental runs beginning with flat slabs of glass indicated that it could have been used for the entire development of the collimator. Calculated optical performance of the design indicated that the reducing camera should be "seeing limited" for most work. Some astigmatism was apparent, but the amount did not turn out to be harmful in actual astronomical use.
After the final design was arrived at, minor changes were made to accommodate actual glass indices of the final melt, and later to accommodate slight changes of radii and thicknesses of the elements as fabricated. An additional small change in spacing between two of the elements was made at the observatory after the reducing camera had been in use for a short time. The fabricated camera is working according to expectations. Some photographs are included in the report to illustrate its performance and utility.
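The pupil matching quoted in the abstract is plain first-order optics and can be checked directly; the only assumption added here is that the collimator's f/10 matches the telescope beam, which the abstract implies:

```python
# first-order check of the reducer geometry (all lengths in inches)
telescope_f_ratio = 10.0        # the f/10 collimator matches the telescope beam
collimator_efl = 10.0           # collimator EFL stated in the abstract
camera_efl = 50.0 / 25.4        # 50 mm Summicron converted to inches (~1.97)

# exit pupil of the collimator = EFL / f-ratio; must match the camera pupil
exit_pupil = collimator_efl / telescope_f_ratio     # 1.0 inch, as stated

# focal reduction factor and resulting system focal ratio
reduction = camera_efl / collimator_efl             # roughly 0.197x
final_f_ratio = telescope_f_ratio * reduction       # roughly f/2, the camera's speed
```

The 1-inch exit pupil indeed matches the entrance pupil of an f/2 lens of ~1.97-inch focal length, which is why the system works at the camera's full f/2 speed.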
16

Performance Engineering of Software Web Services and Distributed Software Systems

Lin, Chia-en 05 1900 (has links)
The promise of service-oriented computing and the availability of Web services promote the delivery and creation of new services based on existing ones, in order to meet new demands and new markets. As Web and Internet-based services move into clouds, the inter-dependency of services and their complexity will increase substantially. There are standards and frameworks for specifying and composing Web services based on functional properties; however, mechanisms to individually address non-functional properties of services and their compositions have not been well established. Furthermore, the cloud ontology depicts service layers from a high level, such as Application and Software, to a low level, such as Infrastructure and Platform. Each component that resides in one layer can be useful to another layer as a service, which hints at the amount of complexity resulting from not only horizontal but also vertical integration in building and deploying a composite service. To meet these requirements and facilitate the use of Web services, we first propose a WSDL extension to permit the specification of non-functional, or Quality of Service (QoS), properties. On top of this foundation, a QoS-aware framework is established that adapts publicly available tools for Web services, augmented by ontology management tools and tools for performance modeling, to exemplify how non-functional properties such as response time, throughput, or utilization of services can be addressed in the service acquisition and composition process. To facilitate Web service composition standards, in this work we extended the framework with additional qualitative information in the service descriptions using the Business Process Execution Language (BPEL). Engineers can use BPEL to explore design options and have the QoS properties analyzed for the composite service. The central concern of our research is performance evaluation in software systems and engineering.
Web service computation forms the first half of this dissertation; performance antipattern detection and elimination forms the second. Performance analysis of software systems is complex due to the large number of components and the interactions among them. Without the knowledge of experienced experts, it is difficult to diagnose performance anomalies and pinpoint the root causes of problems. Software performance antipatterns are similar to design patterns in that they document what to avoid and how to fix performance problems when they appear. Although the idea of applying antipatterns is promising, there are gaps in matching the symptoms and generating feedback solutions for redesign. In this work, we analyze performance antipatterns to extract detectable features, influential factors, and resource involvements so that we can lay the foundation for detecting their presence. We propose a system abstraction layering model and suggestive profiling methods for performance antipattern detection and elimination. The proposed solutions can be used during the refactoring phase and can be included in the software development life cycle. The proposed tools and utilities are implemented, and their use is demonstrated with the RUBiS benchmark.
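The kind of QoS-aware service selection such a framework enables can be illustrated as a small constrained matching step over candidate services; the service records and thresholds below are invented for illustration, not drawn from the dissertation:

```python
# illustrative QoS records: (name, mean response time in ms, throughput in req/s)
CANDIDATES = [
    ("svcA", 120.0, 300.0),
    ("svcB", 80.0, 150.0),
    ("svcC", 95.0, 250.0),
]

def select_service(candidates, max_response_ms, min_throughput):
    """Return the fastest candidate satisfying both QoS constraints,
    or None if no candidate qualifies."""
    feasible = [c for c in candidates
                if c[1] <= max_response_ms and c[2] >= min_throughput]
    return min(feasible, key=lambda c: c[1]) if feasible else None

best = select_service(CANDIDATES, max_response_ms=100.0, min_throughput=200.0)
# svcB is fastest overall but misses the throughput floor; svcC qualifies
```

In the framework described above, such response-time and throughput figures would come from performance models or measurements attached to the WSDL extension, rather than from a hard-coded table.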
17

The Effects of consolidating F-16 phase and cannibalization aircraft on key maintenance indicators

Powell, Matthew J. January 2007 (has links)
Thesis (M. of Military Art and Science)--U.S. Army Command and General Staff College, 2007. / The original document contains color images. Title from title page of PDF document (viewed on May 27, 2008). Includes bibliographic references.
18

Data Transformation Trajectories in Embedded Systems

Kasinathan, Gokulnath January 2016 (has links)
Mobile phone tracking is the determination of a mobile phone's position as it moves from place to place. Location-based service solutions include mobile positioning systems that can be used for a wide array of consumer-demand services like search, mapping, navigation, road traffic management and emergency-call positioning. The Mobile Positioning System (MPS) supports complementary positioning methods for 2G, 3G and 4G/LTE (Long Term Evolution) networks. A mobile phone is known as a UE (User Equipment) in LTE. A prototype method of live trajectory estimation for massive numbers of UEs in an LTE network is proposed in this thesis. RSRP (Reference Signal Received Power) values and TA (Timing Advance) values are part of the LTE events reported for a UE. These specific LTE events can be streamed to a system from the eNodeB in real time by activating measurements on UEs in the network. AoA (Angle of Arrival) and TA values are used to estimate the UE position; the AoA calculation is performed using RSRP values. The calculated UE positions are filtered using a Particle Filter (PF) to estimate the trajectory. To obtain live trajectory estimation for massive numbers of UEs, the LTE event streamer is modelled to produce several task units carrying the event data. The task-level modelled data structures are scheduled across an Arm Cortex-A15 based MPCore with multiple threads. Finally, the IMSI (International Mobile Subscriber Identity) is used to maintain the hidden Markov requirements of the particle filter while preserving load balance across the four Arm A15 cores. This is demonstrated by serial and parallel performance engineering. Future work is proposed on decentralized task-level scheduling with a hash function over the IMSI for additional cores, and on a concentric-circles method for improving AoA accuracy.
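The TA-plus-AoA positioning step described above reduces to a polar-to-Cartesian conversion around the serving eNodeB. A sketch using the standard LTE timing-advance granularity of roughly 78.12 m per TA unit; the eNodeB coordinates and TA/AoA values are assumptions for illustration:

```python
import math

TA_STEP_M = 78.12  # approx. metres per LTE TA unit (16*Ts of round-trip time)

def estimate_position(enb_x, enb_y, ta, aoa_deg):
    """Rough UE position: range from the TA value, bearing from the AoA."""
    r = ta * TA_STEP_M
    theta = math.radians(aoa_deg)
    return enb_x + r * math.cos(theta), enb_y + r * math.sin(theta)

# UE at TA=4 (about 312 m), due east of an eNodeB at the origin
x, y = estimate_position(0.0, 0.0, 4, 0.0)
```

Because TA quantises range to ~78 m steps and AoA carries its own error, a sequence of such fixes is noisy, which is why the thesis feeds them into a particle filter to recover a smooth trajectory.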
19

Modélisation sémantique conceptuelle pour l'ingénierie de performances comportementales de produits complexes / Conceptual semantic modeling for complex product behavioral performance engineering

Diagne, Serigne 07 July 2015 (has links)
The increasing complexity of manufactured products, notably mechatronic products, requires tools and methods to manage their design process. This process covers the steps from the requirements specification to the definition of digital mockups that fulfil the structural, functional and behavioral requirements. To develop high-performance, dependable products at low cost while respecting deadlines, this process must be optimized and mastered. The research conducted during this thesis proposes a generic approach for the design and modeling of mechatronic products that also enables the assessment of their behavioral performance. The approach covers the whole process, from the specification of requirements to the identification and design of digital mockups of products that meet those requirements. It is based on three successive steps: conceptual semantic design (CSD), conceptual semantic modeling (CSM) and behavioral performance engineering (BPE). These theoretical contributions are implemented in the Product-BPAS software developed in this thesis for testing and illustration purposes.
