About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Collaborative supply chain modelling and performance measurement

Angerhofer, Bernhard J. January 2002
For many years, supply chain research focused on operational aspects and therefore mainly on the optimisation of parts of the production and distribution processes. Recently, there has been an increasing interest in supply chain management and collaboration between supply chain partners. However, there is no model that takes into consideration all aspects required to adequately represent and measure the performance of a collaborative supply chain. This thesis proposes a model of a collaborative supply chain, consisting of six constituents, all of which are required in order to provide a complete picture of such a collaborative supply chain. In conjunction with that, a collaborative supply chain performance indicator is developed. It is based on three types of measures to allow the adequate measurement of collaborative supply chain performance. The proposed model of a collaborative supply chain and the collaborative supply chain performance indicator are implemented as a computer simulation. This is done in the form of a decision support environment, whose purpose is to show how changes in any of the six constituents affect collaborative supply chain performance. The decision support environment is configured and populated with information and data obtained in a case study. Verification and validation testing in three different scenarios demonstrates that the decision support environment adequately fulfils its purpose.
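The abstract specifies only that the indicator combines three types of measures. As a hedged illustration, the minimal Python sketch below aggregates three hypothetical measure groups into one weighted score; the measure names and weights are invented for illustration, not taken from the thesis.

```python
# Hypothetical sketch of a composite collaborative supply chain performance
# indicator. The three measure groups and their weights are illustrative
# assumptions, not the actual measures defined in the thesis.

def composite_indicator(measures: dict[str, float],
                        weights: dict[str, float]) -> float:
    """Aggregate normalized measure scores (0..1) into a single indicator."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(weights[name] * measures[name] for name in weights)

# Example: three assumed measure types, each already normalized to [0, 1].
scores  = {"cost_efficiency": 0.72, "responsiveness": 0.85, "collaboration": 0.60}
weights = {"cost_efficiency": 0.40, "responsiveness": 0.35, "collaboration": 0.25}

print(f"collaborative performance indicator: {composite_indicator(scores, weights):.3f}")
```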
2

A Software Verification & Validation Management Framework for the Space Industry

Schulte, Jan January 2009
Software for space applications has special requirements in terms of reliability and dependability. As the verification & validation activities (VAs) of these software systems account for more than 50% of the development effort, and the industry faces political and market pressure to deliver software faster and cheaper, new ways need to be established to reduce this verification & validation effort. In a research project together with RUAG Aerospace Sweden AB and the Swedish Space Corporation, Blekinge Tekniska Högskola is investigating how to optimize the VAs with respect to effectiveness and efficiency. The goal of this thesis is therefore to develop a coherent framework for the management and optimization of verification & validation activities (VAMOS); the framework is evaluated at RUAG Aerospace Sweden AB in Göteborg.
3

Systematic Review of Verification and Validation in Dynamic Programming Languages

Saeed, Farrakh, Saeed, Muhammad January 2008
Verification and validation provide support for improving the quality of software: they ensure that the product is stable and developed according to the requirements of the end user. This thesis presents a systematic review of dynamic programming languages and of the verification & validation practices used for them, covering work published over the period 1985-2008. The study starts from the identification of dynamic aspects and of the differences between static and dynamic languages, and also gives an overview of verification and validation practices for dynamic languages. To validate the review's findings, a survey consisting of (i) interviews and (ii) an online questionnaire was conducted. The analysis of the systematic review shows that dynamic languages are making progress in areas such as integration with common development frameworks, language enhancement, and dynamic aspects, but that they still lag behind static languages in performance. The study also identifies factors that could raise the popularity of dynamic languages in industry. Based on the analysis of the systematic review, the interviews, and the online survey, it is concluded that there is no difference between the methodologies available for verification and validation; dynamic languages support maintaining software quality through their characteristics and dynamic features, and they can also be used to test software developed in static languages. Finally, it is concluded that test-driven development should be adopted, and treated as a mandatory practice, when working with dynamic languages.
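The closing recommendation, test-driven development for dynamic languages, can be made concrete with a minimal Python sketch in which the test is written first and drives the implementation; the function under test is invented for illustration.

```python
# Minimal test-driven development sketch in a dynamic language (Python).
# In TDD the test below is written first; the implementation is then
# written (and refactored) until the test passes.
import unittest

def normalize_version(tag: str) -> str:
    """Strip surrounding whitespace and a leading 'v' from a version tag."""
    return tag.strip().lstrip("v")

class TestNormalizeVersion(unittest.TestCase):
    def test_strips_prefix_and_whitespace(self):
        self.assertEqual(normalize_version(" v1.2.3 "), "1.2.3")

    def test_leaves_plain_versions_untouched(self):
        self.assertEqual(normalize_version("2.0"), "2.0")

if __name__ == "__main__":
    unittest.main()
```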
4

Validation of Network Parameters Based on Network Monitoring

Martínek, Radim January 2011
This Master's Thesis presents a theoretical introduction to, and an implementation of, a "network parameter validation" tool founded on the principle of network traffic monitoring. First, the current practice of setting up computer networks is analyzed together with its limitations. This serves as the starting point for introducing a new approach to implementing and verifying the required network settings, one that uses techniques of verification, simulation and validation. After this introduction, validation techniques are examined in detail. The thesis's main contribution lies in determining which parameters are suitable for validation and in implementing the tool that carries out the validation process. The network traffic that characterizes the behavior of the network is collected with NetFlow technology, which generates network flows; these flows are then consumed by the designed tool to validate the required network parameters. This process verifies whether the main requirements on the computer network have been met.
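As a hedged sketch of the validation step, assuming the flows have already been exported by NetFlow and parsed into records (the record fields and the example rule below are hypothetical, not taken from the thesis):

```python
# Hedged sketch: validating a network policy against collected flow records.
# Field names and the example rule are illustrative assumptions; a real tool
# would parse NetFlow v5/v9/IPFIX exports rather than use inline records.
from dataclasses import dataclass

@dataclass
class FlowRecord:
    src_ip: str
    dst_ip: str
    dst_port: int
    byte_count: int

def validate_no_telnet(flows: list[FlowRecord]) -> list[FlowRecord]:
    """Return flows violating the rule 'no telnet (port 23) traffic'."""
    return [f for f in flows if f.dst_port == 23]

flows = [
    FlowRecord("10.0.0.5", "10.0.1.9", 443, 15_300),
    FlowRecord("10.0.0.7", "10.0.1.2", 23, 412),   # violates the rule
]

violations = validate_no_telnet(flows)
print(f"{len(violations)} violating flow(s): {violations}")
```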
5

The Reference Autonomous Mobility Model: A Framework for Predicting Autonomous Unmanned Ground Vehicle Performance

Durst, Phillip J 03 May 2019
Mobility modeling is a critical step in the ground vehicle acquisition process for military vehicles. Mobility modeling tools, and in particular the NATO Reference Mobility Model (NRMM), have played a critical role in understanding the mission-level capabilities of ground vehicles. This understanding via modeling supports not only developers during early vehicle design but also decision makers in the field previewing the capabilities of ground vehicles in real-world deployments. Due to decades of field testing and operations, mobility modeling for traditional ground vehicles is well understood; however, mobility modeling tools for evaluating autonomous mobility are sparse. Therefore, this dissertation proposes and derives a Reference Autonomous Mobility Model (RAMM). The RAMM leverages cutting-edge modeling and simulation tools to build a mobility model that serves as the mission-level mobility modeling tool currently lacking in the unmanned ground vehicle (UGV) community, thereby filling the current analysis gap in the autonomous vehicle acquisition cycle. The RAMM is built on (1) a thorough review of theories of verification and validation of simulations, (2) a novel framework for validating simulations of autonomous systems, and (3) the mobility modeling framework already established by the NRMM. These building blocks brought to light the need for new, validated modeling and simulation (M&S) tools capable of simulating, at high fidelity, autonomous unmanned ground vehicle operations. This dissertation maps the derivation of the RAMM, starting with a history of verification of simulation models and a literature review of current autonomous mobility modeling methods. In light of these literature reviews, a new framework for V&V of simulations of autonomous systems is proposed, and the requirements for and derivation of the RAMM are presented. This dissertation concludes with an example application of the RAMM to route planning for autonomous UGVs. Once fully developed, the RAMM will serve as an integral part of the design, development, testing and evaluation, and ultimate fielding of autonomous UGVs for military applications.
6

Structural Dynamics Model Calibration and Validation of a Rectangular Steel Plate Structure

Kohli, Karan 24 October 2014
No description available.
7

A virtual pilot algorithm for synthetic HUMS data generation

Fowler, Lee Everett 07 January 2016
Regime recognition is an important tool used in the creation of usage spectra and fatigue loads analysis. While a variety of regime recognition algorithms have been developed and deployed to date, verification and validation (V&V) of such algorithms is still a labor-intensive process that is largely subjective. The current V&V process for regime recognition codes involves a comparison of scripted flight test data to regime recognition algorithm outputs. This is problematic because scripted flight test data is expensive to obtain, may not accurately match the maneuver script, and is often used to train the regime recognition algorithms and thus is not appropriate for V&V purposes. In this work, a simulation-based virtual pilot algorithm is proposed as an alternative to physical testing for generating V&V flight test data. A "virtual pilot" is an algorithm that replicates a human's piloting and guidance role in simulation by translating high-level maneuver instructions into parameterized control laws. Each maneuver regime is associated with a feedback control law, and a control architecture is defined which provides for seamless transitions between maneuvers and allows for execution of an arbitrary maneuver script in simulation. The proposed algorithm does not require training data, iterative learning, or optimization, but rather utilizes a tuned model and feedback control laws defined for each maneuver. As a result, synthetic HUMS data may be generated and used in a highly automated regime recognition V&V process. In this thesis, the virtual pilot algorithm is formulated and the component feedback control laws and maneuver transition schemes are defined. Example synthetic HUMS data is generated using a simulation model of the SH-60B, and virtual pilot fidelity is demonstrated through both conformance to the ADS-33 standards for selected Mission Task Elements and comparison to actual HUMS data.
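A heavily simplified sketch of the architecture described above: each maneuver regime maps to a parameterized feedback control law, and a scripted sequence is flown by switching between laws. The one-dimensional dynamics, gains, and maneuver names are invented; the actual algorithm drives a rotorcraft simulation model of the SH-60B.

```python
# Toy sketch of a "virtual pilot": each maneuver maps to a feedback control
# law; a scripted sequence is flown by switching laws between maneuvers.
# The 1-D dynamics, gains, and maneuver names are illustrative assumptions.

def hold_altitude(state: float, target: float = 100.0, k: float = 0.5) -> float:
    return k * (target - state)          # simple proportional law

def climb(state: float, target: float = 150.0, k: float = 0.3) -> float:
    return k * (target - state)

CONTROL_LAWS = {"hold": hold_altitude, "climb": climb}

def fly_script(script: list[tuple[str, int]], state: float = 100.0,
               dt: float = 0.1) -> list[float]:
    """Execute (maneuver, steps) pairs; return the state trajectory."""
    trajectory = []
    for maneuver, steps in script:
        law = CONTROL_LAWS[maneuver]     # switch control law per maneuver
        for _ in range(steps):
            state += law(state) * dt     # Euler step on toy dynamics
            trajectory.append(state)
    return trajectory

traj = fly_script([("hold", 50), ("climb", 100), ("hold", 50)])
print(f"final altitude ~ {traj[-1]:.1f}")
```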
8

Credible autocoding of control software

Wang, Timothy 21 September 2015
Formal methods is a discipline that uses a collection of mathematical techniques and formalisms to model and analyze software systems. Motivated by the new formal methods-based certification recommendations for safety-critical embedded software and the significant increase in the cost of verification and validation (V&V), this research is about creating a software development process for control systems that can provide mathematical guarantees of high-level functional properties on the code. The process, dubbed credible autocoding, leverages control theory in the automatic generation of control software documented with proofs of stability and performance. The main output of this research is an automated, credible autocoding prototype that transforms the Simulink model of the controller into C code documented with a code-level proof of the stability of the controller. The code-level proof, expressed using a formal specification language, is embedded into the code as annotations. The annotations guarantee that the auto-generated code conforms to the input model to the extent that key properties are satisfied. They also provide sufficient information to enable an independent, automatic, formal verification of the auto-generated controller software.
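As a hedged sketch of the concept only, not the prototype's actual output, the snippet below plays the role of a toy autocoder: it emits C source for a trivial controller together with an ACSL-style contract asserting that the state stays inside a quadratic, Lyapunov-like invariant set. The gain, the invariant, and the annotation wording are illustrative assumptions.

```python
# Conceptual sketch of credible autocoding: generate controller code together
# with a proof annotation (here an ACSL-style quadratic invariant contract).
# The gain, the invariant level set, and the annotation text are illustrative
# assumptions, not output of the actual prototype described in the thesis.

def autocode_scalar_controller(gain: float, p: float) -> str:
    """Emit C source for u = -gain * x with an ellipsoid-invariance contract."""
    return f"""\
/*@ requires {p} * x * x <= 1.0;
    ensures  {p} * \\result * \\result <= 1.0; */
double controller_step(double x) {{
    return {-gain} * x;  /* contract holds since |gain| <= 1 is assumed */
}}
"""

print(autocode_scalar_controller(gain=0.8, p=0.25))
```

In this style of toolchain the annotations are meant to be discharged by an independent verifier; Frama-C is one such tool for ACSL-annotated C, though whether a given contract is provable depends on the analysis used.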
9

Closing the building energy performance gap by improving our predictions

Sun, Yuming 27 August 2014
A growing number of studies indicate that the predicted energy performance of buildings deviates significantly from actual measured energy use. This so-called "performance gap" may undermine confidence in energy-efficient buildings, and thereby the role of building energy efficiency in the national carbon reduction plan. Closing the performance gap has become a daunting challenge for the professions involved, stimulating them to reflect on how to investigate and better understand the size, origins, and extent of the gap. The energy performance gap underlines the lack of prediction capability of current building energy models. Specifically, existing predictions are predominantly deterministic, providing a point estimate of the future quantity or event of interest; they thus largely ignore the error and noise inherent in an uncertain future of building energy consumption. To overcome this, the thesis turns to a thriving area in engineering statistics that focuses on computation-based uncertainty quantification. The work provides theories and models that enable probabilistic prediction of future energy consumption, forming the basis of risk assessment in decision-making. Uncertainties that affect the wide variety of interacting systems in buildings are organized into five scales (meteorology - urban - building - systems - occupants). At each level, both model-form and input-parameter uncertainty are characterized with probability, involving statistical modeling and parameter distributional analysis. The quantification of uncertainty at the different system scales is accomplished using the network of collaborators established through an NSF-funded research project. The bottom-up uncertainty quantification approach, which deals with meta uncertainty, is fundamental for generic application of uncertainty analysis across different types of buildings, under different urban climate conditions, and in different usage scenarios. Probabilistic predictions are evaluated by two criteria: coverage and sharpness. The goal of probabilistic prediction is to maximize the sharpness of the predictive distributions subject to coverage of the realized values. The method is evaluated on a set of buildings on the Georgia Tech campus, where the energy consumption of each building is monitored, in most cases with hourly sub-metered consumption data. This research shows that a good match between probabilistic predictions and real building energy consumption in operation is achievable. Results from the six case buildings show that using the best point estimates of the probabilistic predictions reduces the mean absolute error (MAE) from 44% to 15% and the root mean squared error (RMSE) from 49% to 18% in total annual cooling energy consumption. For monthly cooling energy consumption, the MAE decreases from 44% to 21% and the RMSE from 53% to 28%. More importantly, the entire probability distributions are statistically verified at the annual level of building energy predictions. Based on uncertainty and sensitivity analysis applied to these buildings, the thesis concludes that the proposed method significantly reduces the magnitude of the building energy performance gap and effectively infers its origins.
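The coverage and sharpness criteria reduce to a compact computation. A minimal sketch, assuming interval predictions are given as (lower, upper) bounds; the numbers are invented, not data from the case buildings.

```python
# Hedged sketch of evaluating probabilistic predictions by coverage (fraction
# of realized values inside their prediction intervals) and sharpness (mean
# interval width). Example numbers are illustrative, not the thesis data.

def coverage(intervals: list[tuple[float, float]], actuals: list[float]) -> float:
    hits = sum(lo <= y <= hi for (lo, hi), y in zip(intervals, actuals))
    return hits / len(actuals)

def sharpness(intervals: list[tuple[float, float]]) -> float:
    return sum(hi - lo for lo, hi in intervals) / len(intervals)

# Hypothetical 90% prediction intervals for monthly cooling energy (MWh)
intervals = [(80, 120), (95, 140), (60, 90), (100, 150)]
actuals   = [110, 138, 95, 123]   # the third month falls outside its interval

print(f"coverage:  {coverage(intervals, actuals):.2f}")   # 0.75
print(f"sharpness: {sharpness(intervals):.1f} MWh wide")  # 41.2
```

Maximizing sharpness subject to coverage then amounts to preferring the narrowest intervals whose empirical coverage still meets the nominal level.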
10

Towards Optimization of Software V&V Activities in the Space Industry [Two Industrial Case Studies]

Ahmad, Ehsan, Raza, Bilal January 2009
Developing software for highly dependable space applications and systems is a formidable task. With new political and market pressures on the space industry to deliver more software at a lower cost, optimization of its methods and standards needs to be investigated. The industry has to follow standards that strictly set quality goals and prescribe engineering processes and methods to fulfill them. The overall goal of this study is to evaluate whether the current use of the ECSS standards is cost efficient, whether there are ways to make the process leaner while still maintaining quality, and whether the V&V activities can be optimized. This paper presents results from two industrial case studies of companies in the European space industry that follow the ECSS standards and carry out various V&V activities. The case studies reported here focused on how the ECSS standards were used by the companies, how that affected their processes, and how their V&V activities can be optimized.
