11

Verification and Validation of Object Oriented Software Design : Guidelines on how to Choose the Best Method

Thurn, Christian January 2004
The earlier in the development process a fault is found, the cheaper it is to correct. Verification and validation methods are therefore important tools. The problem is that there are many methods to choose between. This thesis sheds light on how to choose among four common verification and validation methods. The methods presented in this thesis are reviews, inspections and Fault Tree Analysis. The review and inspection methods are evaluated in an empirical study. The result of the study shows that there are differences between the methods in terms of defect detection. Based on this study and a literature study, guidelines are given on how to choose the best method in a given context.
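Fault Tree Analysis, one of the methods named above, quantifies how basic-event probabilities combine through AND/OR gates into a top-event probability. A minimal sketch in Python, with invented events and probabilities rather than anything from the thesis:

    # Minimal Fault Tree Analysis sketch: compute the top-event probability
    # from independent basic-event probabilities. The events and numbers
    # below are invented for illustration.

    def p_and(*probs):
        """AND gate: all inputs must occur (independent events)."""
        result = 1.0
        for p in probs:
            result *= p
        return result

    def p_or(*probs):
        """OR gate: at least one input occurs, via the complement rule."""
        result = 1.0
        for p in probs:
            result *= (1.0 - p)
        return 1.0 - result

    p_spec_error = 0.05    # hypothetical: requirement misunderstood
    p_design_error = 0.08  # hypothetical: design mistake
    p_review_miss = 0.30   # hypothetical: review fails to catch the defect
    p_test_miss = 0.20     # hypothetical: testing fails to catch the defect

    # Top event: a defect is introduced (spec OR design error) AND it
    # escapes both review and testing.
    p_defect = p_or(p_spec_error, p_design_error)
    p_top = p_and(p_defect, p_review_miss, p_test_miss)
    print(f"P(defect reaches release) = {p_top:.4f}")  # 0.0076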
12

Appropriate Web Usability Evaluation Method during Product Development

Umar, Azeem, Tatari, Kamran Khan January 2008
Web development differs from traditional software development. As in all software applications, usability is one of the core components of web applications. Usability engineering and web engineering are rapidly growing fields, and companies can improve their market position by making their products and services more accessible through usability engineering. User testing is often skipped when deadlines approach, and this is very much the case in web application development, even though achieving good usability is one of the main concerns of web development. Several methods have been proposed in the literature for evaluating web usability, but there is not yet agreement in the software development community about which usability evaluation method is more useful than another. Extensive usability evaluation is usually not feasible in web development; on the other hand, an unusable website increases the total cost of ownership. Improved usability is one of the major factors in achieving sufficient user satisfaction. It can be achieved by applying an appropriate usability evaluation method, but cost-effective usability evaluation tools are still lacking. In this thesis we study usability inspection and usability testing methods. Furthermore, an effort has been made to find an appropriate usability evaluation method for web applications during product development, and we propose a method based on the common opinion of the web industry. / There is no standard framework or mechanism for selecting a usability evaluation method for software development. In the context of web development projects, where time and budget are more limited than in traditional software development projects, it becomes even harder to select an appropriate usability evaluation method. It is certainly not feasible for a web development project to apply multiple usability inspection methods and multiple usability testing methods during product development; a good choice can be a combined method composed of one usability inspection method and one usability testing method. The thesis contributes by identifying the usability evaluation methods that are common in the literature and in the current web industry.
13

Using Formal Methods to Build and Validate Reliable and Secure Smart Systems via TLA+

Obeidat, Nawar H. 05 October 2021
No description available.
14

Application of CFD to Safety and Thermal-Hydraulic Analysis of Lead-Cooled Systems

Jeltsov, Marti January 2011
Computational Fluid Dynamics (CFD) is increasingly being used in nuclear reactor safety analysis as a tool that enables safety-related physical phenomena occurring in the reactor coolant system to be described in more detail and accuracy. Validation is a necessary step in improving the predictive capability of a computational code or coupled computational codes. Validation refers to the assessment of model accuracy incorporating any uncertainties (aleatory and epistemic) that may be of importance. The uncertainties must be identified, quantified and, if possible, reduced. In the first part of this thesis, a discussion on the development of an approach and experimental facility for the validation of coupled Computational Fluid Dynamics codes and System Thermal Hydraulics (STH) codes is given. The validation of a coupled code requires experiments which feature significant two-way feedbacks between the component (CFD sub-domain) and the system (STH sub-domain). Results of CFD analysis that are used in the development of a flexible design of the TALL-3D experimental facility are presented. The facility consists of a lead-bismuth eutectic (LBE) thermal-hydraulic loop operating in forced and natural circulation regimes with a heated pool-type 3D test section. Transient analysis of the mixing and stratification phenomena in the 3D test section under forced and natural circulation conditions in the loop shows that the test section outlet temperature deviates from that predicted by the analytical solution (which the 1D STH solution essentially is). An experimental validation test matrix according to the key physical phenomena of interest in the new experimental facility is also developed. In the second part of the thesis we consider the risk related to steam generator tube leakage or rupture (SGTL/R) in a pool-type design of lead-cooled reactor (LFR). We demonstrate that there is a possibility that small steam bubbles leaking from the SGT will be dragged by the turbulent coolant flow into the core region. Voiding of the core might cause threats of a reactivity insertion accident or local damage (burnout) of fuel rod cladding. Trajectories of the bubbles are determined by the bubble size and the turbulent flow field of the lead coolant. The main objective of this study is to quantify the likelihood of steam bubble transport to the core region in case of SGT leakage in the primary coolant system of the ELSY (European Lead-cooled SYstem) design. The coolant flow field and bubble motion are simulated with the CFD code Star-CCM+. First, we discuss drag correlations for a steam bubble moving in liquid lead. Thereafter the steady-state liquid lead flow field in the primary system is modeled according to the ELSY design parameters at nominal full-power operation. Finally, the consequences of SGT leakage are modeled by injecting bubbles in the steam generator region. An assessment of the probability that bubbles can reach the core region and also accumulate in the primary system is performed. The most dangerous leakage positions in the SG and bubble sizes are identified. Possible design solutions for prevention of core voiding in case of SGTL/R are discussed.
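The drag-correlation step mentioned above has a standard structure: balance buoyancy against a Reynolds-number-dependent drag force and iterate for the bubble's terminal rise velocity. A sketch in Python, where the Schiller-Naumann correlation and the property values are assumptions rather than the thesis's choices:

    # Terminal rise velocity of a small steam bubble in liquid lead, using
    # the Schiller-Naumann drag correlation. A sketch only: the correlation
    # choice and fluid properties are assumptions, not from the thesis.
    import math

    RHO_LEAD = 10500.0   # kg/m^3, liquid lead density (approximate)
    MU_LEAD = 2.0e-3     # Pa*s, liquid lead dynamic viscosity (approximate)
    RHO_STEAM = 5.0      # kg/m^3, steam density (assumed)
    G = 9.81             # m/s^2

    def drag_coefficient(re):
        """Schiller-Naumann correlation, valid for Re below roughly 1000."""
        re = max(re, 1e-12)
        return 24.0 / re * (1.0 + 0.15 * re**0.687)

    def terminal_velocity(d_bubble, tol=1e-10, max_iter=200):
        """Fixed-point iteration on the buoyancy-drag force balance."""
        v = 0.1  # m/s, initial guess
        for _ in range(max_iter):
            re = RHO_LEAD * v * d_bubble / MU_LEAD
            cd = drag_coefficient(re)
            v_new = math.sqrt(4.0 * G * d_bubble * (RHO_LEAD - RHO_STEAM)
                              / (3.0 * cd * RHO_LEAD))
            if abs(v_new - v) < tol:
                return v_new
            v = v_new
        return v

    for d in (0.5e-3, 1e-3, 2e-3):  # bubble diameters in meters
        print(f"d = {d*1e3:.1f} mm -> v_t = {terminal_velocity(d):.3f} m/s")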
15

ENABLING REAL TIME INSTRUMENTATION USING RESERVOIR SAMPLING AND BIN PACKING

Sai Pavan Kumar Meruga (16496823) 30 August 2023
Software Instrumentation is the process of collecting data during an application's runtime, which will help us debug, detect errors and optimize the performance of the binary. The recent increase in demand for low latency and high throughput systems has introduced new challenges to the process of Software Instrumentation. Software Instrumentation, especially dynamic, has a huge impact on systems performance in scenarios where there is no early knowledge of data to be collected. Naive approaches collect too much or too little data, negatively impacting the system's performance. This thesis investigates the overhead added by reservoir sampling algorithms at different levels of granularity in real-time instrumentation of distributed software systems. Also, this thesis describes the implementation of sampling techniques and algorithms to reduce the overhead caused by instrumentation.
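Reservoir sampling, named in the title, maintains a uniform random sample of fixed size k over a stream of unknown length in one pass and O(k) memory, which is what makes it attractive for low-overhead instrumentation. A minimal sketch of the classic Algorithm R in Python (not the thesis's implementation):

    import random

    def reservoir_sample(stream, k, rng=random):
        """Algorithm R: uniform sample of k items from a stream of
        unknown length, using one pass and O(k) memory."""
        reservoir = []
        for i, item in enumerate(stream):
            if i < k:
                reservoir.append(item)
            else:
                # Item i is kept with probability k / (i + 1).
                j = rng.randint(0, i)
                if j < k:
                    reservoir[j] = item
        return reservoir

    # Example: sample 5 of 10,000 simulated instrumentation events.
    events = (f"event-{n}" for n in range(10_000))
    print(reservoir_sample(events, 5))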
16

Verification and validation of knowledge-based clinical decision support systems - a practical approach : A descriptive case study at Cambio CDS

De Sousa Barroca, José Duarte January 2021
The use of clinical decision support (CDS) systems has grown progressively during the past decades. CDS systems are associated with improved patient safety and outcomes, better prescription and diagnosing practices by clinicians and lower healthcare costs. Quality assurance of these systems is critical, given the potentially severe consequences of any errors. Yet, after several decades of research, there is still no consensual or standardized approach to their verification and validation (V&V). This project is a descriptive and exploratory case study aiming to provide a practical description of how Cambio CDS, a market-leading developer of CDS services, conducts its V&V process. Qualitative methods including semi-structured interviews and coding-based textual data analysis were used to elicit the description of the V&V approaches used by the company. The results showed that the company's V&V methodology is strongly influenced by the company's model-driven development approach, a strong focus and leveraging of domain knowledge and good testing practices with a focus on automation and test-driven development. A few suggestions for future directions were discussed.
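The test-driven practice described above can be made concrete with a small example: pin down boundary test cases for a decision rule before implementing it. The rule below is a generic BMI classifier (standard WHO thresholds) invented for illustration; it is not Cambio CDS code:

    # Sketch of test-driven verification of a decision-support rule.
    # The rule and cases are illustrative, not from Cambio CDS.
    import unittest

    def bmi_category(weight_kg: float, height_m: float) -> str:
        bmi = weight_kg / (height_m ** 2)
        if bmi < 18.5:
            return "underweight"
        if bmi < 25.0:
            return "normal"
        if bmi < 30.0:
            return "overweight"
        return "obese"

    class TestBmiRule(unittest.TestCase):
        def test_boundaries(self):
            # Boundary cases written before the implementation, TDD-style.
            self.assertEqual(bmi_category(53.0, 1.70), "underweight")  # ~18.3
            self.assertEqual(bmi_category(65.0, 1.70), "normal")       # ~22.5
            self.assertEqual(bmi_category(75.0, 1.70), "overweight")   # ~26.0
            self.assertEqual(bmi_category(90.0, 1.70), "obese")        # ~31.1

    if __name__ == "__main__":
        unittest.main()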
17

Parameterized Verification and Synthesis for Distributed Agreement-Based Systems

Nouraldin Jaber (13796296) 19 September 2022
Distributed agreement-based systems use common distributed agreement protocols such as leader election and consensus as building blocks for their target functionality—processes in these systems may need to agree on a leader, on the members of a group, on owners of locks, or on updates to replicated data. Such distributed agreement-based systems are common and potentially permit modular, scalable verification approaches that mimic their modular design. Interestingly, while there are many verification efforts that target agreement protocols themselves, little attention has been given to distributed agreement-based systems that build on top of these protocols.

In this work, we aim to develop a fully-automated, modular, and usable parameterized verification approach for distributed agreement-based systems. To do so, we need to overcome the following challenges. First, the fully automated parameterized verification problem, i.e., the problem of algorithmically checking if the system is correct for any number of processes, is a well-known undecidable problem. Second, to enable modular verification that leverages the inherently-modular nature of these agreement-based systems, we need to be able to support abstractions of agreement protocols. Such abstractions can replace the agreement protocols' implementations when verifying the overall system, enabling modular reasoning. Finally, even when the verification is fully automated, a system designer still needs assistance in modeling their distributed agreement-based systems.

We systematically tackle these challenges through the following contributions.

First, we support efficient, decidable verification of distributed agreement-based systems by developing a computational model—the GSP model—for reasoning about distributed (agreement-based) systems that admits decidability and cutoff results. Cutoff results enable practical verification by reducing the parameterized verification problem to the verification problem of a system with a fixed, finite number of processes. The GSP model supports generalized communication primitives and global guards, both of which are essential to enable abstractions of agreement protocols.

Then, we address the usability and modularity aspects by developing a framework, QuickSilver, tailored for modeling and modular parameterized verification of distributed agreement-based systems. QuickSilver provides an intuitive domain-specific language, called Mercury, that is equipped with two agreement primitives capable of abstracting away agreement protocols when modeling agreement-based systems, enabling modular verification. QuickSilver extends the decidability and cutoff results of the GSP model to provide fully automated, efficient parameterized verification for a large class of systems modeled in Mercury.

Finally, we leverage synthesis techniques to further enhance the usability of our approach and propose Cinnabar, a tool that supports synthesis of distributed agreement-based systems with efficiently-decidable parameterized verification. Cinnabar allows a system designer to provide a sketch of their Mercury model and uses a counterexample-guided synthesis procedure to search for model completions that both belong to the efficiently-decidable fragment of Mercury and are correct.

We evaluate our contributions on various interesting distributed agreement-based systems adapted from real-world applications, such as a data store, a lock service, a surveillance system, a pathfinding algorithm for mobile robots, and more.
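The cutoff results mentioned above reduce an unbounded verification question to finitely many finite ones: if a property class admits a cutoff c, checking every instance with at most c processes settles the property for all larger instances. A toy Python sketch, with a postulated cutoff and a deliberately simple token protocol (this is not the GSP model or QuickSilver):

    # Toy cutoff-style check: explore the reachable states of a token-ring
    # mutual-exclusion protocol for each instance size up to a postulated
    # cutoff, checking the property in every reachable state.

    def successors(state, n):
        token, critical = state
        if critical is None:
            yield (token, token)            # token holder enters its CS
            yield ((token + 1) % n, None)   # or passes the token along
        else:
            yield (token, None)             # process leaves its CS

    def mutual_exclusion_holds(n):
        initial = (0, None)  # token at process 0, nobody critical
        seen, frontier = {initial}, [initial]
        while frontier:
            token, critical = state = frontier.pop()
            # Property: only the current token holder may be critical.
            if critical is not None and critical != token:
                return False
            for nxt in successors(state, n):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
        return True

    CUTOFF = 4  # postulated cutoff for this toy protocol
    if all(mutual_exclusion_holds(n) for n in range(1, CUTOFF + 1)):
        print(f"Mutual exclusion verified for all n, via cutoff {CUTOFF}.")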
18

Model Composition and Aggregation in Macromolecular Regulatory Networks

Randhawa, Ranjit 14 May 2008
Mathematical models of regulatory networks become more difficult to construct and understand as they grow in size and complexity. Large regulatory network models can be built up from smaller models, representing subsets of reactions within the larger network. This dissertation focuses on novel model construction techniques that extend the ability of biological modelers to construct larger models by supplying them with tools for decomposing models and using the resulting components to construct larger models. Over the last 20 years, molecular biologists have amassed a great deal of information about the genes and proteins that carry out fundamental biological processes within living cells --- processes such as growth and reproduction, movement, signal reception and response, and programmed cell death. The full complexity of these macromolecular regulatory networks is too great to tackle mathematically at the present time. Nonetheless, modelers have had success building dynamical models of restricted parts of the network. Systems biologists need tools now to support composing "submodels" into more comprehensive models of integrated regulatory networks. We have identified and developed four novel processes (fusion, composition, flattening, and aggregation) whose purpose is to support the construction of larger models. Model Fusion combines two or more models in an irreversible manner. In fusion, the identities of the original (sub)models are lost. Beyond some size, fused models will become too complex to grasp and manage as single entities. In this case, it may be more useful to represent large models as compositions of distinct components. In Model Composition one thinks of models not as monolithic entities but rather as collections of smaller components (submodels) joined together. A composed model is built from two or more submodels by describing their redundancies and interactions. While it is appealing in the short term to build larger models from pre-existing models, each developed independently for their own purposes, we believe that ultimately it will become necessary to build large models from components that have been designed for the purpose of combining them. We define Model Aggregation as a restricted form of composition that represents a collection of model elements as a single entity (a "module"). A module contains a definition of pre-determined input and output ports. The process of aggregation (connecting modules via their interface ports) allows modelers to create larger models in a controlled manner. Model Flattening converts a composed or aggregated model with some hierarchy or connections to one without such connections. The relationships used to describe the interactions among the submodels are lost, as the composed or aggregated model is converted into a single large (flat) model. Flattening allows us to use existing simulation tools, which have no support for composition or aggregation. / Ph. D.
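Flattening, the simplest of the four operations to mechanize, can be sketched concretely: merge the species and reactions of each submodel into one model while a port map identifies shared species. The data layout and example network below are invented for illustration and are not from the dissertation's tools:

    # Sketch of flattening two composed submodels into one flat reaction
    # model. Layout and example species/reactions are invented.

    def flatten(submodels, shared):
        """Merge submodels into one flat model. `shared` maps a species
        name in a submodel to its canonical name in the flat model."""
        canon = lambda s: shared.get(s, s)
        flat = {"species": set(), "reactions": []}
        for model in submodels:
            flat["species"] |= {canon(s) for s in model["species"]}
            for reactants, products in model["reactions"]:
                flat["reactions"].append(
                    ([canon(s) for s in reactants],
                     [canon(s) for s in products]))
        return flat

    cell_cycle = {"species": {"CycB", "Cdc20"},
                  "reactions": [(["CycB"], ["CycB", "Cdc20"])]}
    apoptosis = {"species": {"Casp3", "Cdc20_ap"},
                 "reactions": [(["Cdc20_ap"], ["Casp3"])]}

    # The two submodels refer to the same regulator under different names.
    flat = flatten([cell_cycle, apoptosis], shared={"Cdc20_ap": "Cdc20"})
    print(sorted(flat["species"]))  # ['Casp3', 'Cdc20', 'CycB']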
19

Modeling, Dynamics, and Control of Tethered Satellite Systems

Ellis, Joshua Randolph 07 April 2010
Tethered satellite systems (TSS) can be utilized for a wide range of space-based applications, such as satellite formation control and propellantless orbital maneuvering by means of momentum transfer and electrodynamic thrusting. A TSS is a complicated physical system operating in a continuously varying physical environment, so most research on TSS dynamics and control makes use of simplified system models to make predictions about the behavior of the system. In spite of this fact, little effort is ever made to validate the predictions made by these simplified models. In an ideal situation, experimental data would be used to validate the predictions made by simplified TSS models. Unfortunately, adequate experimental data on TSS dynamics and control is not readily available at this time, so some other means of validation must be employed. In this work, we present a validation procedure based on the creation of a top-level computational model, the predictions of which are used in place of experimental data. The validity of all predictions made by lower-level computational models is assessed by comparing them to predictions made by the top-level computational model. In addition to the proposed validation procedure, a top-level TSS computational model is developed and rigorously verified. A lower-level TSS model is used to study the dynamics of the tether in a spinning TSS. Floquet theory is used to show that the lower-level model predicts that the pendular motion and transverse elastic vibrations of the tether are unstable for certain in-plane spin rates and system mass properties. Approximate solutions for the out-of-plane pendular motion are also derived for the case of high in-plane spin rates. The lower-level system model is also used to derive control laws for the pendular motion of the tether. Several different nonlinear control design techniques are used to derive the control laws, including methods that can account for the effects of dynamics not accounted for by the lower-level model. All of the results obtained using the lower-level system model are compared to predictions made by the top-level computational model to assess their validity and applicability to an actual TSS. / Ph. D.
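The Floquet analysis mentioned above follows a standard computational recipe: integrate the variational (state-transition) equation over one period to obtain the monodromy matrix, then check whether all of its eigenvalues (the Floquet multipliers) lie inside the unit circle. A sketch in Python with a Mathieu-type oscillator standing in for the tether dynamics, whose actual equations are not given in the abstract:

    # Floquet stability check: integrate the state-transition matrix over
    # one period and inspect its eigenvalues (Floquet multipliers). A
    # Mathieu-type oscillator stands in for the actual tether model.
    import numpy as np
    from scipy.integrate import solve_ivp

    DELTA, EPS, T = 1.0, 0.4, 2.0 * np.pi  # assumed parameters and period

    def rhs(t, phi_flat):
        """Variational equation Phi' = A(t) Phi for
        x'' + (DELTA + EPS*cos t) x = 0."""
        A = np.array([[0.0, 1.0],
                      [-(DELTA + EPS * np.cos(t)), 0.0]])
        return (A @ phi_flat.reshape(2, 2)).ravel()

    # Monodromy matrix: state-transition matrix after one period.
    sol = solve_ivp(rhs, (0.0, T), np.eye(2).ravel(), rtol=1e-10, atol=1e-12)
    monodromy = sol.y[:, -1].reshape(2, 2)
    multipliers = np.linalg.eigvals(monodromy)

    print("Floquet multipliers:", multipliers)
    print("stable" if np.all(np.abs(multipliers) <= 1.0 + 1e-9)
          else "unstable")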
20

Optimization Under Uncertainty and Total Predictive Uncertainty for a Tractor-Trailer Base-Drag Reduction Device

Freeman, Jacob Andrew 07 September 2012
One key outcome of this research is the design for a 3-D tractor-trailer base-drag reduction device that predicts a 41% reduction in wind-averaged drag coefficient at 57 mph (92 km/h) and that is relatively insensitive to uncertain wind speed and direction and uncertain deflection angles due to mounting accuracy and static aeroelastic loading; the best commercial device of non-optimized design achieves a 12% reduction at 65 mph. Another important outcome is the process by which the optimized design is obtained. That process includes verification and validation of the flow solver, a less complex but much broader 2-D pathfinder study, and the culminating 3-D aerodynamic shape optimization under uncertainty (OUU) study. To gain confidence in the accuracy and precision of a computational fluid dynamics (CFD) flow solver and its Reynolds-averaged Navier-Stokes (RANS) turbulence models, it is necessary to conduct code verification, solution verification, and model validation. These activities are accomplished using two commercial CFD solvers, Cobalt and RavenCFD, with four turbulence models: Spalart-Allmaras (S-A), S-A with rotation and curvature, Menter shear-stress transport (SST), and Wilcox 1998 k-ω. Model performance is evaluated for three low subsonic 2-D applications: turbulent flat plate, planar jet, and NACA 0012 airfoil at α = 0°. The S-A turbulence model is selected for the 2-D OUU study. In the 2-D study, a tractor-trailer base flap model is developed that includes six design variables with generous constraints; 400 design candidates are evaluated. The design optimization loop includes the effect of uncertain wind speed and direction, and post processing addresses several other uncertain effects on drag prediction. The study compares the efficiency and accuracy of two optimization algorithms, evolutionary algorithm (EA) and dividing rectangles (DIRECT), twelve surrogate models, six sampling methods, and surrogate-based global optimization (SBGO) methods. The DAKOTA optimization and uncertainty quantification framework is used to interface the RANS flow solver, grid generator, and optimization algorithm. The EA is determined to be more efficient in obtaining a design with significantly reduced drag (as opposed to more efficient in finding the true drag minimum), and total predictive uncertainty is estimated as ±11%. While the SBGO methods are more efficient than a traditional optimization algorithm, they are computationally inefficient due to their serial nature, as implemented in DAKOTA. Because the S-A model does well in 2-D but not in 3-D under these conditions, the SST turbulence model is selected for the 3-D OUU study that includes five design variables and evaluates a total of 130 design candidates. Again using the EA, the study propagates aleatory (wind speed and direction) and epistemic (perturbations in flap deflection angle) uncertainty within the optimization loop and post processes several other uncertain effects. For the best 3-D design, total predictive uncertainty is +15/-42%, due largely to using a relatively coarse (six million cell) grid. That is, the best design drag coefficient estimate is within 15 and 42% of the true value; however, its improvement relative to the no-flaps baseline is accurate within 3-9% uncertainty. / Ph. D.
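The wind-averaged drag coefficient quoted above is obtained by averaging drag over the relative-wind (yaw) angles produced by ambient wind at a fixed road speed. A sketch in Python, where the Cd(yaw) model, wind statistics, and weighting are invented placeholders rather than the dissertation's procedure:

    # Sketch of a wind-averaged drag coefficient: average Cd over the yaw
    # angles produced by a fixed wind speed from uniformly random
    # directions. The Cd(yaw) model and speeds are invented.
    import math

    V_TRUCK = 25.5   # m/s (~57 mph)
    V_WIND = 3.1     # m/s, assumed mean wind speed

    def yaw_angle(theta):
        """Relative-wind yaw for wind direction theta w.r.t. travel."""
        u = V_TRUCK + V_WIND * math.cos(theta)   # along-track component
        v = V_WIND * math.sin(theta)             # cross-track component
        return math.degrees(math.atan2(v, u))

    def cd_of_yaw(yaw_deg):
        """Invented placeholder: drag grows with yaw magnitude."""
        return 0.60 + 0.004 * abs(yaw_deg)

    # Average over wind directions assumed uniform on [0, 2*pi).
    N = 3600
    cd_wa = sum(cd_of_yaw(yaw_angle(2 * math.pi * k / N))
                for k in range(N)) / N
    print(f"wind-averaged Cd = {cd_wa:.3f}")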
