1

A Study of Software Size Estimation using Function Point

Wang, Der-Rong 11 July 2003 (has links)
Software size estimation has long been a challenging task throughout the software development process. This paper presents an approach that uses function point analysis to estimate program coding and testing effort in an MIS department that maintains an ERP system and has a low employee turnover rate. The method first analyzes historical data using regression analysis and then builds a software estimation model with calibrated coefficients for the relevant parameters. The estimation model is tested against the remaining historical data to evaluate its prediction accuracy and is shown to reach roughly 90% accuracy. It is therefore useful not only for company-wide information resource allocation but also for the performance evaluation of software engineers.
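As a rough illustration of the modelling step described above, the sketch below fits a linear regression of effort on function points over a hypothetical set of historical records and checks prediction accuracy on a held-out portion. The data values, the train/test split, and the use of MMRE (mean magnitude of relative error) as the accuracy measure are assumptions for illustration only, not the thesis's actual data or procedure.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical historical records: adjusted function points vs. person-hours
# of coding and testing effort (illustrative numbers, not the thesis data).
fp = np.array([120, 85, 230, 150, 60, 310, 175, 95, 260, 140]).reshape(-1, 1)
effort = np.array([480, 350, 910, 600, 250, 1230, 700, 390, 1040, 560])

# Hold out part of the history to check prediction accuracy, as the thesis does.
train_fp, test_fp = fp[:7], fp[7:]
train_eff, test_eff = effort[:7], effort[7:]

model = LinearRegression().fit(train_fp, train_eff)
pred = model.predict(test_fp)

# Mean magnitude of relative error (MMRE): a common accuracy measure for
# effort estimation models; an MMRE near 0.10 corresponds loosely to ~90% accuracy.
mmre = np.mean(np.abs(pred - test_eff) / test_eff)
print(f"effort ~ {model.intercept_:.1f} + {model.coef_[0]:.2f} * FP, MMRE = {mmre:.2f}")
```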
2

Integrating GIS approaches with geographic profiling as a novel conservation tool

Faulkner, Sally January 2018 (has links)
Geographic profiling (GP) was originally developed to solve the problem of information overload when dealing with cases of serial crime. In criminology, the model uses spatial data relating to the locations of connected crimes to prioritise the search for the criminal's anchor point (usually a home or workplace), and is extremely successful in this field. Previous work has shown how the same approach can be adapted to biological data, but to date the model has assumed a spatially homogeneous landscape and has made no attempt to integrate more complex spatial information (e.g., altitude, land use). It is this issue that I address here. In addition, I show for the first time how the model can be applied to conservation data and - taking the model back to its origins in criminology - to wildlife crime. In Chapter 2, I use the Dirichlet Process Mixture (DPM) model of geographic profiling to locate sleep trees for tarsiers in dense jungle in Indonesia, using as input the locations at which calls were recorded, demonstrating how the model can be applied to locating the nests, dens or roosts of other elusive animals and potentially improving estimates of population size, with important implications for the management of both species and habitats. In Chapter 3, I show how spatial information in the form of citizen science data could be used to improve a study of invasive mink in the Hebrides. In Chapter 4, I turn to the issue of 'commuter crime' in a study of poaching in the Savé Valley Conservancy (SVC) in Zimbabwe, where, although poaching occurs inside the SVC, the majority of poachers live outside it, showing how the model can be adjusted to reflect a simple binary classification of the landscape (inside or outside the SVC). Finally, in Chapter 5, I combine more complex land use information (estimates of farm density) with the GP model to improve predictions of human-wildlife conflict.
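For readers unfamiliar with geographic profiling, the sketch below illustrates only its core idea: scoring candidate anchor locations with a distance-decay function of the observed sites and ranking grid cells for search. It is a deliberately simplified stand-in, assuming an exponential decay kernel and made-up coordinates; the thesis itself uses the Dirichlet Process Mixture model and richer spatial covariates, which are not shown here.

```python
import numpy as np

# Observed sites (e.g., locations where tarsier calls were recorded), as (x, y)
# coordinates on an arbitrary grid. These points are made up for illustration.
sites = np.array([[2.0, 3.0], [2.5, 3.4], [3.1, 2.8], [2.2, 2.5]])

def score_surface(sites, grid_size=100, extent=6.0, decay=1.5):
    """Score every grid cell by summing a simple exponential distance-decay
    kernel over the observed sites. High-scoring cells are prioritised in the
    search for the anchor point (e.g., a sleep tree)."""
    xs = np.linspace(0.0, extent, grid_size)
    ys = np.linspace(0.0, extent, grid_size)
    gx, gy = np.meshgrid(xs, ys)
    surface = np.zeros_like(gx)
    for sx, sy in sites:
        d = np.hypot(gx - sx, gy - sy)
        surface += np.exp(-decay * d)
    return xs, ys, surface

xs, ys, surface = score_surface(sites)
iy, ix = np.unravel_index(np.argmax(surface), surface.shape)
print(f"highest-priority search cell: x={xs[ix]:.2f}, y={ys[iy]:.2f}")
```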
3

Software Size Estimation Performance Of Small And Middle Size Firms In Turkey

Colak, Erdem 01 September 2010 (has links) (PDF)
Software cost estimation is essential for software companies to be more competitive and more profitable. The objective of this thesis is to study the software size estimation practices currently adopted by Turkish software companies, to identify best practices, and to suggest appropriate methods that can help companies reduce errors in their software size estimates.
4

Estimating the necessary sample size for a binomial proportion confidence interval with low success probabilities

Ahlers, Zachary January 1900 (has links)
Master of Science / Department of Statistics / Christopher Vahl / Among the most widely used statistical concepts and techniques, seen even in the most cursory of introductory courses, are the confidence interval, the binomial distribution, and sample size estimation. This paper investigates the generation of a confidence interval from a binomial experiment in the case where zero successes are expected. Several current methods of generating a binomial proportion confidence interval are examined by means of large-scale simulations and compared in order to determine an ad hoc method for generating a confidence interval with coverage as close as possible to nominal while minimizing width. This is then used to construct a formula which allows the estimation of the sample size necessary to obtain a sufficiently narrow confidence interval (with some predetermined probability of success) using the ad hoc method, given a prior estimate of the probability of success for a single trial. With this formula, binomial experiments could be planned more efficiently, allowing researchers to plan only for the amount of precision they deem necessary, rather than trying to work with methods of producing confidence intervals that result in inefficient or, at worst, meaningless bounds.
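To make the setting concrete, the sketch below compares the empirical coverage of two standard binomial proportion intervals (Wilson score and exact Clopper-Pearson) at a low success probability, then runs a crude sample-size search for a target interval width. The choice of methods, the prior guess of p, and the target width are assumptions for illustration; the thesis's ad hoc method and formula are not reproduced here.

```python
import numpy as np
from scipy import stats

def wilson_ci(x, n, conf=0.95):
    """Wilson score interval for a binomial proportion."""
    z = stats.norm.ppf(1 - (1 - conf) / 2)
    phat = x / n
    denom = 1 + z**2 / n
    centre = (phat + z**2 / (2 * n)) / denom
    half = z * np.sqrt(phat * (1 - phat) / n + z**2 / (4 * n**2)) / denom
    return max(0.0, centre - half), min(1.0, centre + half)

def clopper_pearson_ci(x, n, conf=0.95):
    """Exact (Clopper-Pearson) interval via beta quantiles."""
    a = 1 - conf
    lo = 0.0 if x == 0 else stats.beta.ppf(a / 2, x, n - x + 1)
    hi = 1.0 if x == n else stats.beta.ppf(1 - a / 2, x + 1, n - x)
    return lo, hi

def coverage(method, p, n, reps=20000, conf=0.95, rng=np.random.default_rng(1)):
    """Empirical coverage of an interval method when successes are rare."""
    xs = rng.binomial(n, p, size=reps)
    hits = sum(lo <= p <= hi for lo, hi in (method(x, n, conf) for x in xs))
    return hits / reps

# Coverage comparison at a low success probability.
for name, method in [("Wilson", wilson_ci), ("Clopper-Pearson", clopper_pearson_ci)]:
    print(name, round(coverage(method, p=0.01, n=100), 3))

# Crude sample-size search: smallest n whose Clopper-Pearson interval at the
# expected success count is narrower than a target width, given a prior guess of p.
p_guess, target_width = 0.01, 0.02
n = 50
while True:
    lo, hi = clopper_pearson_ci(round(n * p_guess), n)
    if hi - lo <= target_width:
        break
    n += 50
print("approximate required n:", n)
```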
5

Statistical modelling of ECDA data for the prioritisation of defects on buried pipelines

Bin Muhd Noor, Nik Nooruhafidzi January 2017 (has links)
Buried pipelines are vulnerable to the threat of corrosion. Hence, they are normally coated with a protective coating to isolate the metal substrate from the surrounding environment, with cathodic protection (CP) current additionally applied to the pipeline surface to halt any corrosion activity that might be taking place. With time this barrier deteriorates, which could potentially lead to corrosion of the pipe. The External Corrosion Direct Assessment (ECDA) methodology was developed with the intention of upholding the structural integrity of pipelines. Above-ground indirect inspection techniques such as DCVG, an essential part of an ECDA, are commonly used to determine coating defect locations and to measure a defect's severity. This is followed by excavation of the identified location for further examination of the extent of pipeline damage; any coating or corrosion defect found at this stage is repaired and remediated. The locations of such excavations are determined by the measurements obtained from the DCVG examination, in the form of %IR, together with subjective input from experts based on the environment and the physical characteristics of the pipeline. Whilst this seems to be a straightforward process, the factors that give rise to the initial %IR are not fully understood. This lack of understanding, combined with the subjective input from assessors, has led to unnecessary excavations being conducted, which puts tremendous financial strain on pipeline operators. Additionally, the threat of undiscovered defects due to the error-prone nature of the current method has the potential to severely compromise the pipeline's safe continual operation. Accurately predicting the coating defect size (TCDA) and interpreting the indication signal (%IR) from an ECDA are important for pipeline operators to promote safety while keeping operating costs at a minimum. Furthermore, with better estimates, the uncertainty in the DCVG indication is reduced and decisions about the locations of excavation are better informed. Ensuring the accuracy of these estimates, however, does not come without challenges. These challenges include (1) the need for proper methods for analysing the large data sets produced by indirect assessment and (2) uncertainty about the probability distributions of the quantities involved. Standard mean regression models, e.g. ordinary least squares (OLS), have been used but fail to take into account the skewness of the distributions involved. The aim of this thesis is thus to develop statistical models that better predict TCDA and interpret the %IR from the indirect assessment of an ECDA more precisely. The pipeline data used for the analyses is based on a recent ECDA project conducted by TWI Ltd. for the Middle Eastern Oil Company (MEOC). To address the challenges highlighted above, Quantile Regression (QR) was used to comprehensively characterise the underlying distribution of the dependent variable. This can be effective, for example, when determining the different effects of contributing variables on different sizes of TCDA (different quantiles). Another useful advantage is that the technique is robust to outliers owing to its reliance on absolute errors, whereas traditional mean regression ignores the effect of contributing variables on other quantiles of the dependent variable and, because OLS squares the errors, is less robust to outliers.
Other forms of QR, such as Bayesian Quantile Regression (BQR), which has the advantage of supplementing future inspection projects with prior data, and Logistic Quantile Regression (LQR), which ensures the prediction of the dependent variable stays within its specified bounds, were also applied to the MEOC dataset. The novelty of this research lies in the approaches taken by the author in producing the models highlighted above, namely:
* the use of non-linear Quantile Regression (QR) with interacting variables for TCDA prediction;
* the application of a regularisation procedure (LASSO) for the generalisation of the TCDA prediction model;
* the use of the Bayesian Quantile Regression (BQR) technique to estimate the %IR and TCDA;
* the use of Logistic Regression as a guideline for the probability of excavation; and
* the use of Logistic Quantile Regression (LQR) to ensure that the predicted values of %IR and POPD remain within bounds.
Novel findings from this thesis include:
* some degree of relationship between the DCVG technique (%IR readings) and corrosion dimensions; the negative trend in the relationship between TCDA and POPD further supports the idea that %IR has some relation to corrosion; and
* evidence from Chapters 4, 5 and 6 that the corrosion activity rate is more prominent than the growth of TCDA at its median depth; it is therefore suggested that, for this set of pipelines (those belonging to MEOC), coating defects be repaired before they reach their median size.
To the best of the author's knowledge, such approaches have never before been applied to ECDA data. The findings from this thesis also shed some light on the stochastic nature of the evolution of corrosion pits, which was not known before and is only made possible by the use of the approaches highlighted above. The resulting models are also novel, since no previous model has been developed based on the said methods. The contribution to knowledge from this research is therefore a greater understanding of the relationships between the variables stated above (TCDA, %IR and POPD). With this new knowledge, one can better prioritise excavation locations and better interpret DCVG indications. With the availability of ECDA data, it is also possible to predict the magnitude of corrosion activity using the models developed in this thesis. Furthermore, the knowledge gained here has the potential to translate into cost savings for pipeline operators while ensuring that safety is properly addressed.
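To make the modelling approach concrete, the sketch below fits quantile regression at several quantiles with statsmodels on hypothetical stand-in data (a skewed response driven by a single predictor). The variable names, data-generating assumptions, and chosen quantiles are illustrative only; they are not the MEOC dataset or the thesis's full models, which add interactions, LASSO regularisation, and Bayesian and logistic variants.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)

# Hypothetical stand-in data: a skewed "defect size" response driven by one
# predictor (think %IR), with right-skewed noise. Not the MEOC data.
n = 400
ir = rng.uniform(0, 60, n)
defect = 2.0 + 0.8 * ir + rng.gamma(shape=2.0, scale=3.0, size=n)
df = pd.DataFrame({"defect": defect, "ir": ir})

# Fit several conditional quantiles; the slope can differ across quantiles,
# which is exactly the information a mean-only (OLS) fit throws away.
for q in (0.25, 0.5, 0.9):
    res = smf.quantreg("defect ~ ir", df).fit(q=q)
    print(f"quantile {q}: intercept={res.params['Intercept']:.2f}, "
          f"slope={res.params['ir']:.2f}")
```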
6

E-cosmic: A Business Process Model Based Functional Size Estimation Approach

Kaya, Mahir 01 February 2010 (has links) (PDF)
The cost and effort estimation of projects depends on software size, and an estimate of product size is needed as early in the project as possible. Conventional early functional size estimation methods produce a size at an early phase but suffer from subjectivity and unrepeatability because the calculation is manual. On the other hand, automated Functional Size Measurement approaches require constructs that only become available in considerably later development phases. In this study we developed an approach called e-Cosmic to calculate and automate functional size measurement based on business process models. Functions, and the input and output relationship types of each function, are identified in the business process model. The size of each relationship type is determined by assigning the appropriate data movements according to the COSMIC Measurement Manual. The relationship-type sizes are then aggregated to produce the size of each function, and the size of the software product is the sum of the sizes of these functions. This process is automated on the basis of the business process model by a script developed in the ARIS tool concept. Three case studies were conducted to validate the proposed functional size estimation method (e-Cosmic). The sizes of the products in the case studies were also measured manually with COSMIC FSM (Abran et al., 2007) and with a conventional early estimation method called Early and Quick COSMIC FFP. We compared the results of the different approaches and discussed the usability of e-Cosmic based on the findings.
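As a minimal sketch of the COSMIC aggregation rule described above (each identified data movement, whether Entry, Exit, Read or Write, contributes one COSMIC Function Point, and functional-process sizes are summed), the example below tallies a few hypothetical functional processes. The process names and movement lists are invented for illustration; e-Cosmic's actual extraction from ARIS business process models is not shown.

```python
from collections import Counter

# One COSMIC Function Point (CFP) per identified data movement: Entry (E),
# Exit (X), Read (R), Write (W). A functional process's size is the count of
# its data movements; the product size is the sum over functional processes.
functional_processes = {
    "Create purchase order": ["E", "R", "W", "X"],
    "Approve purchase order": ["E", "R", "W", "X", "X"],
    "List open orders": ["E", "R", "X"],
}

total = 0
for name, movements in functional_processes.items():
    size = len(movements)            # 1 CFP per data movement
    total += size
    print(f"{name}: {size} CFP ({Counter(movements)})")
print(f"product size: {total} CFP")
```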
7

Two algorithms for leader election and network size estimation in mobile ad hoc networks

Neumann, Nicholas Gerard 17 February 2005 (has links)
We develop two algorithms for important problems in mobile ad hoc networks (MANETs). A MANET is a collection of mobile processors (“nodes”) which communicate via message passing over wireless links. Each node can communicate directly with other nodes within a specified transmission radius; other communication is accomplished via message relay. Communication links may go up and down in a MANET (as nodes move toward or away from each other); thus, the MANET can consist of multiple connected components, and connected components can split and merge over time. We first present a deterministic leader election algorithm for asynchronous MANETs along with a correctness proof for it. Our work involves substantial modifications of an existing algorithm and its proof, and we adapt the existing algorithm to the asynchronous environment. Our algorithm’s running time and message complexity compare favorably with existing algorithms for leader election in MANETs. Second, many algorithms for MANETs require or can benefit from knowledge about the size of the network in terms of the number of processors. As such, we present an algorithm to approximately determine the size of a MANET. While the algorithm’s approximations of network size are only rough ones, the algorithm has the important qualities of requiring little communication overhead and being tolerant of link failures.
8

Applications of a Novel Sampling Technique to Fully Dynamic Graph Algorithms

Mountjoy, Benjamin 11 September 2013 (has links)
In this thesis we study the application of a novel sampling technique to building fully-dynamic randomized graph algorithms. We present the following results: (1) A randomized algorithm to estimate the size of a cut in an undirected graph $G = (V, E)$, where $V$ is the set of nodes, $E$ is the set of edges, $n = |V|$ and $m = |E|$. Our algorithm processes edge insertions and deletions in $O(\log^2 n)$ time. For a cut $(U, V \setminus U)$ of size $K$, for any subset $U$ of $V$ with $|U| < |V|$, our algorithm returns an estimate $x$ of the size of the cut satisfying $K/2 \leq x \leq 2K$ with high probability in $O(|U|\log n)$ time. (2) A randomized distributed algorithm for maintaining a spanning forest in a fully-dynamic synchronous network. Our algorithm maintains a spanning forest of a graph with $n$ nodes, with worst-case message complexity $\tilde{O}(n)$ per edge insertion or deletion, where messages are of size $O(\text{polylog}(n))$. For each node $v$ we require memory of size $\tilde{O}(degree(v))$ bits. This improves upon the best previous algorithm with respect to worst-case message complexity, given by Awerbuch, Cidon, and Kutten, which has an amortized message complexity of $O(n)$ and worst-case message complexity of $O(n^2)$. / Graduate / 0984 / b_mountjoy9@hotmail.com
9

Robust Experimental Design for Speech Analysis Applications

January 2020 (has links)
In many biological research studies, including speech analysis, clinical research, and prediction studies, the validity of the study depends on how well the training data set represents the target population. For example, in speech analysis, if one is performing emotion classification based on speech, the performance of the classifier depends mainly on the size and quality of the training data set. With small sample sizes and unbalanced data, classifiers developed in this context may focus on incidental differences in the training data set (e.g., gender, age, and dialect) rather than on emotion. This thesis evaluates several sampling methods and a non-parametric approach to estimating the sample sizes required to minimize the effect of these nuisance variables on classification performance. The work focuses specifically on speech analysis applications and therefore uses speech features such as Mel-Frequency Cepstral Coefficients (MFCC) and Filter Bank Cepstral Coefficients (FBCC). The non-parametric D_p divergence measure was used to study the differences between sampling schemes (stratified and multistage sampling) and the changes due to sentence types in the sampled set. / Dissertation/Thesis / Masters Thesis Electrical Engineering 2020
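For readers who want to see what stratification over a nuisance variable looks like in practice, the sketch below uses scikit-learn's train_test_split to draw a training set stratified jointly on the emotion label and a nuisance variable. The data are synthetic and the procedure is a generic illustration, not the thesis's D_p-divergence-based analysis.

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical speaker metadata: the emotion label we care about and a
# nuisance variable (gender) we do not want the training split to skew on.
n = 1000
df = pd.DataFrame({
    "emotion": rng.choice(["angry", "happy", "neutral"], size=n),
    "gender": rng.choice(["f", "m"], size=n, p=[0.3, 0.7]),
})

# Stratify the split jointly on emotion and the nuisance variable so both
# appear in the training set in the same proportions as in the full pool.
strata = df["emotion"] + "_" + df["gender"]
train, test = train_test_split(df, train_size=200, stratify=strata, random_state=0)

print(train.groupby(["emotion", "gender"]).size())
```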
10

Calculating power for the Finkelstein and Schoenfeld test statistic

Zhou, Thomas J. 07 March 2022 (has links)
The Finkelstein and Schoenfeld (FS) test is a popular generalized pairwise comparison approach to analyzing prioritized composite endpoints (i.e., components are assessed in order of clinical importance). Power and sample size estimation for the FS test, however, are generally done via simulation studies. This simulation approach can be extremely computationally burdensome, a burden compounded by an increasing number of endpoint components and by increasing sample size. We propose an analytic solution to calculate power and sample size for commonly encountered two-component hierarchical composite endpoints. The power formulas are derived assuming underlying distributions for each component outcome at the population level, which provides a computationally efficient and practical alternative to the standard simulation approach. The proposed analytic approach is extended to derive conditional power formulas, which are used in combination with the promising zone methodology to perform sample size re-estimation in adaptive clinical trials. Prioritized composite endpoints with more than two components are also investigated. Extensive Monte Carlo simulation studies were conducted to demonstrate that the performance of the proposed analytic approach is consistent with that of the standard simulation approach. We also demonstrate through simulations that the proposed methodology possesses generally desirable properties, including robustness to mis-specified underlying distributional assumptions. We illustrate the proposed methods by applying the formulas to calculate power and sample size for the Transthyretin Amyloidosis Cardiomyopathy Clinical Trial (ATTR-ACT) and for the EMPULSE trial of empagliflozin treatment of acute heart failure.
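To give a feel for the simulation approach that the analytic formulas replace, the sketch below estimates power for a simplified two-component hierarchical endpoint using generalized pairwise comparisons with a permutation reference distribution. It ignores censoring and follow-up time, and the component distributions, effect sizes, and simulation settings are assumptions for illustration rather than the FS test as applied in ATTR-ACT or EMPULSE.

```python
import numpy as np

rng = np.random.default_rng(2024)

def fs_statistic(trt, ctl):
    """Sum of pairwise scores: each treatment/control pair is compared on the
    higher-priority component first (here, survival status); ties are broken
    on the lower-priority continuous component."""
    s = np.sign(trt[:, 0][:, None] - ctl[:, 0][None, :])   # component 1
    tie = s == 0
    s2 = np.sign(trt[:, 1][:, None] - ctl[:, 1][None, :])  # component 2
    return np.where(tie, s2, s).sum()

def simulate_arm(n, surv_prob, score_mean):
    """Hypothetical two-component endpoint: (survived?, functional score)."""
    survived = rng.binomial(1, surv_prob, n)
    score = rng.normal(score_mean, 1.0, n)
    return np.column_stack([survived, score])

def estimate_power(n_per_arm, n_sims=200, n_perm=200, alpha=0.05):
    rejections = 0
    for _ in range(n_sims):
        trt = simulate_arm(n_per_arm, surv_prob=0.85, score_mean=0.4)
        ctl = simulate_arm(n_per_arm, surv_prob=0.75, score_mean=0.0)
        obs = abs(fs_statistic(trt, ctl))
        pooled = np.vstack([trt, ctl])
        exceed = 0
        for _ in range(n_perm):
            idx = rng.permutation(len(pooled))
            stat = fs_statistic(pooled[idx[:n_per_arm]], pooled[idx[n_per_arm:]])
            exceed += abs(stat) >= obs
        if (exceed + 1) / (n_perm + 1) <= alpha:   # permutation p-value
            rejections += 1
    return rejections / n_sims

print("estimated power:", estimate_power(n_per_arm=50))
```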
