About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

The design of predictable multi-core processors which support time-triggered software architectures

Athaide, Keith Florence January 2011 (has links)
Safety-critical systems – such as those used in the medical, automotive and aerospace fields – have a crucial dependence on the reliable functioning of one or more embedded processors. In such systems, a co-operative software design methodology can be used to guarantee a high degree of reliability; when coupled with a time-triggered architecture, this methodology can result in robust and predictable systems with a comparatively simple software design, low operating system overhead, easier testability, greater certification support and tight jitter control. Nevertheless, the use of a co-operative design methodology is not always appropriate, since it may negatively affect system responsiveness and can add to the maintenance costs. Many alternatives have been researched and implemented over the past few decades to address such concerns, albeit by compromising on some of the benefits this architecture provides. This thesis makes five main contributions to tackle the major obstacles to single-processor time-triggered co-operative designs:

• it proposes and describes the implementation of a novel multi-core processor with two capable software scheduler implementations that allow application software to be designed as for a single-core system;
• it describes the internalisation of these scheduler implementations into hardware which allows application software to use all available computing capacity;
• it describes a hardware technique to eliminate the variations in starting times of application software, thereby increasing the stability of applications;
• it describes the implementation of a hardware technique for sharing input/output resources amongst application software with increased determinism by leveraging the time-triggered nature of the underlying system;
• it describes the implementation of a predictable processor that supports purely co-operative software and is suitable for the secondary cores on a multi-core design (due to its small size).

Overall, the contributions of this thesis both increase system responsiveness and lessen the impact of seemingly innocuous maintenance activities.
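To make the single-processor baseline concrete, here is a minimal sketch of a time-triggered co-operative (TTC) scheduler of the general kind this thesis builds on: tasks run to completion and are released at fixed tick offsets and periods. The Task structure, tick period and task names are illustrative assumptions, not the scheduler implementations described in the thesis.

```python
import time

class Task:
    """A co-operative task: runs to completion, released every `period` ticks."""
    def __init__(self, func, period, offset=0):
        self.func = func      # task body; must return quickly (run-to-completion)
        self.period = period  # release period, in ticks
        self.offset = offset  # first release tick (offsets spread load and control jitter)

def ttc_schedule(tasks, tick_s, num_ticks):
    """Time-triggered co-operative dispatch: one timer tick, no pre-emption."""
    for tick in range(num_ticks):
        start = time.monotonic()
        for task in tasks:
            if tick >= task.offset and (tick - task.offset) % task.period == 0:
                task.func()  # tasks co-operate: each must finish before the next runs
        # Sleep until the next tick boundary (a timer interrupt on a real MCU).
        remaining = tick_s - (time.monotonic() - start)
        time.sleep(max(0.0, remaining))

# Illustrative task set (names and periods are assumptions, not from the thesis).
tasks = [
    Task(lambda: print("read sensors"),   period=10, offset=0),
    Task(lambda: print("update control"), period=10, offset=5),
    Task(lambda: print("log state"),      period=50, offset=1),
]
ttc_schedule(tasks, tick_s=0.01, num_ticks=100)
```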
22

Managing mismatches in COTS-based development

Alves, Carina Frota January 2005 (has links)
The prospect of reducing the time, cost and risk of system development has increased the interest in developing systems from COTS (Commercial-Off-The-Shelf) products. The development of complex COTS-based systems is known to be an intricate and risk-prone process. There are three main reasons for this. Firstly, suppliers develop COTS products with the objective of satisfying the needs of the marketplace rather than the specific needs of the acquirer organization. Secondly, COTS products are often delivered as black boxes, which means that the acquirer's understanding of COTS features is frequently partial and uncertain. Thirdly, in order to sustain competitive advantage, suppliers regularly modify their products, hence forcing customers to update their systems. These challenges do not occur in traditional software development; they are attributes particular to COTS-based development (CBD). As a result, new processes, methods and models are needed to support the development of COTS-based systems. The selection of COTS products is one of the most important activities taking place in the context of CBD. It involves the assessment of how well COTS products satisfy customer requirements. Due to the nature of COTS, mismatches may occur between what is wanted from the system (i.e. customer requirements) and what the system is able to provide (i.e. its features). In addition, a number of risks may arise from these mismatches, such as insufficient COTS adherence to requirements, low confidence in COTS quality, and unwanted COTS features. We argue that the successful selection of a suitable COTS product depends on the effective analysis of mismatches and management of risks. This thesis proposes a novel method, called TAOS (Tradeoff Analysis for COTS-based Systems), to guide the selection of COTS products. TAOS offers a systematic approach to assessing the suitability of COTS products by exploring mismatches, handling risks and suggesting possible tradeoffs to be made. The method uses a goal-oriented approach to specify the requirements of the acquirer organization. We demonstrate how utility theory can be used to compare COTS alternatives by examining the degree to which COTS products satisfy requirements, and thereby inform the decision-making process. As a way to complement the quantitative assessment obtained from the use of utility theory, we present a set of templates to build exploratory scenarios and define heuristics to facilitate the tradeoff analysis. We also present a strategy to identify and manage risks. To establish the effectiveness of TAOS in improving the quality of decisions made during the selection process, we have conducted a number of case studies in different domains.
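As an illustration of the utility-theory step, the following is a minimal sketch of a weighted-additive utility score used to rank COTS alternatives against requirements. The requirement names, weights and satisfaction scores are hypothetical, and TAOS itself may define the aggregation differently.

```python
# Hypothetical requirement weights (summing to 1) and per-product satisfaction
# scores in [0, 1]; neither the names nor the numbers come from the thesis.
weights = {"security": 0.40, "interoperability": 0.35, "usability": 0.25}

candidates = {
    "COTS-A": {"security": 0.9, "interoperability": 0.5, "usability": 0.7},
    "COTS-B": {"security": 0.6, "interoperability": 0.8, "usability": 0.8},
}

def utility(scores, weights):
    """Weighted-additive utility: sum of weight * satisfaction over requirements."""
    return sum(weights[req] * scores[req] for req in weights)

# Rank alternatives by overall utility; mismatches show up as low per-requirement scores.
for name in sorted(candidates, key=lambda c: utility(candidates[c], weights), reverse=True):
    print(name, round(utility(candidates[name], weights), 3))
```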
23

The utility of using a RAD-type development approach for a large, complex information system

Berger, Hilary January 2005 (has links)
Rapid Application Development (RAD) is an iterative and incremental development approach that evolved to address problems associated with more structured development approaches such as the Waterfall Model. Even though RAD is becoming an increasingly accepted development approach, much of the existing literature focuses on small to medium-sized development projects. There is considerable debate about its application in large, complex IS development arenas. This research utilises a development project currently being implemented within UK Regional Government as a case study. It represents an atypical opportunity to examine the utility of RAD within a large, complex IS development, presenting the real-life context, experiences and commentary of individuals directly involved. An interpretive stance is adopted to gain a broad view of the organizational environment of the IS and the wider external context within which the information system is situated. An ethnographic approach was selected, enabling a richer and deeper interpretation and a more comprehensive understanding of the issues under investigation. This methodology included non-participatory observation, indirect observation and informal semi-structured interviews. It also involved multiple strategies of data collection and analysis to facilitate cross-checking and to yield stronger substantiation of the analysis. The thesis examines the cultural aspects inherent in the studied environment that impacted severely upon the project. It further explores a number of other issues held to be problematic for large and complex development arenas: managing user involvement and expectations, communications, requirements elicitation, decision-making and testing. Analysis of the case study material has aided the production of a model of critical success factors that could be applied to other such environments adopting a RAD-type approach. It thus contributes to the field of IS knowledge by informing the debate surrounding the applicability of RAD across large, complex development arenas.
24

Network analysis of large scale object oriented software systems

Pakhira, Anjan January 2013 (has links)
The evolution of software engineering knowledge, technology, tools, and practices has seen progressive adoption of new design paradigms. Currently, the predominant design paradigm is object-oriented design. Despite the advocated and demonstrated benefits of object-oriented design, there are known limitations of static software analysis techniques for object-oriented systems, and there are many current and legacy object-oriented software systems that are difficult to maintain using existing reverse engineering techniques and tools. Consequently, there is renewed interest in dynamic analysis of object-oriented systems, and the emergence of large and highly interconnected systems has fuelled research into the development of new scalable techniques and tools to aid program comprehension and software testing. In dynamic analysis, a key research problem is efficient interpretation and analysis of large volumes of precise program execution data to facilitate efficient handling of software engineering tasks. Some of the techniques employed to improve the efficiency of analysis are inspired by empirical approaches developed in other fields of science and engineering that face comparable data analysis challenges. This research is focused on the application of empirical network analysis measures to dynamic analysis data of object-oriented software. The premise of this research is that the methods that contribute significantly to the object collaboration network's structural integrity are also important for delivery of the software system's function. This thesis makes two key contributions. First, a definition is proposed for the concept of the functional importance of methods of object-oriented software. Second, the thesis proposes and validates a conceptual link between object collaboration networks and the properties of a network model with power-law connectivity distribution. Results from empirical software engineering experiments on JHotDraw and Google Chrome are presented. The results indicate that the five standard centrality-based network measures considered can be used to predict functionally important methods with a significant level of accuracy. The search for the functional importance of software elements is an essential starting point for program comprehension and software testing activities. The proposed definition and application of network analysis has the potential to improve the efficiency of software engineering activities in the post-release phase by facilitating rapid identification of potentially functionally important methods in object-oriented software. These results, with some refinement, could be used for change impact prediction and a host of other potentially beneficial applications that improve software engineering techniques.
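The following is a minimal sketch of the kind of analysis described: building a method-level collaboration network from execution data and ranking methods by standard centrality measures. The toy graph and the particular measures shown (degree, betweenness, PageRank) are assumptions for illustration; the thesis evaluates five centrality-based measures on real traces from JHotDraw and Google Chrome.

```python
import networkx as nx

# Toy collaboration graph: nodes are methods, edges are calls observed at
# run time (purely illustrative data, not traces from the thesis).
G = nx.DiGraph()
G.add_edges_from([
    ("Main.run", "Editor.open"), ("Editor.open", "Figure.draw"),
    ("Editor.open", "Storage.load"), ("Figure.draw", "Canvas.repaint"),
    ("Storage.load", "Figure.draw"), ("Main.run", "Canvas.repaint"),
])

# Standard centrality measures; the five measures used in the thesis may differ.
measures = {
    "degree": nx.degree_centrality(G),
    "betweenness": nx.betweenness_centrality(G),
    "pagerank": nx.pagerank(G),
}

for name, scores in measures.items():
    top = sorted(scores, key=scores.get, reverse=True)[:3]
    print(name, "->", top)  # candidate functionally important methods
```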
25

Runtime value specialization

Bilianou, Panagiota January 2008 (has links)
As programming has become more modular and component-based, software is easier to maintain and reuse. However, in most cases, programs are used for only one or a few specific tasks, and using general-purpose software to perform a specific task incurs a loss in performance. Optimizing compilers have been given the task of identifying and optimizing for common-case scenarios while, at the same time, guaranteeing correct behaviour for the uncommon cases. Numerous compiler optimizations have been proposed, some static and others dynamic, i.e. performed at runtime. The work described in this thesis focuses on dynamic optimizations because they have some advantages over static ones, such as access to runtime state, transparency and adaptability. More specifically, the work described in this thesis is about value specialization, a technique which eliminates computation that is based on constant values, or on values that are constant over a certain period of the program's execution. In this thesis a specializer was implemented that belongs to the category of specializers for object-oriented programming languages. The specializer is part of a Java virtual machine, Jikes RVM, so it can take advantage of runtime values that are unknown before the program runs.
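A minimal sketch of the idea behind value specialization, assuming a value is observed to be (quasi-)constant at run time: a specialized variant of a routine is generated for that value while the general version is kept for other cases. In a JIT such as Jikes RVM this happens at the compiler level with guards on the specialized value; the Python below only illustrates the concept and is not the thesis's specializer.

```python
def scale(xs, factor):
    """General-purpose version: reads `factor` on every call."""
    return [x * factor for x in xs]

def specialize_scale(factor):
    """Return a variant of `scale` specialized for a runtime-constant `factor`."""
    if factor == 1:
        # Constant folding for the observed value: the multiplication disappears.
        return lambda xs: list(xs)
    # Otherwise bake the constant into the specialized body.
    return lambda xs: [x * factor for x in xs]

# Suppose profiling observes that `factor` is (almost) always 1 at run time;
# a specializing JIT would emit the specialized body guarded on that value.
observed = 1
fast_scale = specialize_scale(observed)

data = [1, 2, 3]
assert fast_scale(data) == scale(data, observed)  # guard: results must agree
print(fast_scale(data))
```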
26

Factors contributing to information technology software project risk : perceptions of software practitioners

Rahim, Faizul Azli Mohd January 2011 (has links)
The majority of software development projects are normally connected with the application of new or advanced technologies. The use of advanced and, in most cases, unproven technology on software development projects can lead to a large number of risks. Every aspect of a software development project can be influenced by risks that could cause project failure. Generally, the success of a software development project is directly connected with the risks involved, i.e. project risks should be successfully mitigated in order to finish a software development project within the budget allocated. One of the earliest studies of software project risk was conducted by Boehm (1991), which identified the top 10 risk factors for software projects. Boehm's work has been the starting point for research into software project risk. Over the past 10-15 years, many studies have been conducted and frameworks and guidelines introduced. However, software development project failures are still being reported in the academic literature. Researchers and practitioners in this area have long been concerned with the difficulties of managing the risks relating to the development and implementation of IT software projects. This research is concerned specifically with the risk of failure of IT software projects, and how related risk constructs are framed. Extant research highlights the need for further research into how a theoretically coherent risk construct can be combined with an empirical validation of the links between risk likelihood, risk impact on cost overrun, and evidence of strategic responses in terms of risk mitigation. The proposal within this research is to address this aspect of the debate by seeking to clarify the role of a project life cycle as a frame of reference that contracting parties might agree upon and which should act as the basis for the project risk construct. The focus on the project life cycle as a risk frame of reference potentially leads to a common, practical view of the (multi)dimensional setting of risk within which risk factors may be identified, and which is believed to be grounded across a wide range of projects and, specifically in this research, IT software projects. The research surveyed and examined the opinions of professionals in IT and software companies. We assess which risk factors are most likely to occur in IT software projects; evaluate risk impact by assessing which risk factors IT professionals specifically think are likely to give rise to the possibility of cost overruns; and empirically link which risk mitigation strategies are most likely to be employed in practice as a response to the risks and impacts identified. The data obtained were processed, analysed and ranked. Using Excel and SPSS for factor analysis, the risk factors were reduced and grouped into clusters and components for further correlation analysis. The analysis was able to evidence opinion on risk likelihood, the impact of the risk of cost overrun, and the strategic responses that are likely to be effective in mitigating the risks that emerge in IT software projects. The analysis indicates that it is possible to identify a grouping of risk that is reflective of the different stages of the project life cycle, which suggests three identifiable groups when viewing risk from the likelihood of occurrence and three identifiable groups from a cost overrun perspective.
The evidence relating to the cost overrun view of risk provided a stronger view of which components of risk were important compared with risk likelihood. The research accounts for this difference by suggesting that the more coherent framework, or risk construct, offered by viewing risk within the context of a project life cycle allows those involved in IT software projects to have a clearer view of the relationships between risk factors. It also allows the various risk components and the associated emergent clusters to be more readily identified. The research on strategic response indicated that different strategies are effective for risk likelihood versus cost overrun. The study was able to verify the effective mitigation strategies that are correlated with the risk components. In this way, the conditioned actions or consequences can be observed upon identification of risk likelihood and risk impact on cost overrun. However, the focus of attention on technical issues and the degree to which they attract a strategic response is a new finding, in addition to the usual reports concerning the importance of non-technical aspects of IT software projects. The research also developed a fuzzy-theory-based model to assist software practitioners across the software development life cycle. This model could help practitioners make decisions about dealing with risks in a software project. The contribution of the research relates to the assessment of risk within a construct that is defined in the context of a fairly broadly accepted view of the life cycle of projects. The software risk construct based on the project management framework proposed in this research could facilitate a focus on roles and responsibilities, and allows for the coordination and integration of activities for regular monitoring and alignment with project goals. This contribution would better enable management to identify and manage risks as they emerge with project stages, more closely reflect project activity and processes, and facilitate the exercise of risk management strategies. Keywords: risk management, project planning, IT implementation, project life cycle
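A minimal sketch of a fuzzy-style risk rating of the general kind mentioned above, assuming ramp membership functions and a single "high likelihood AND high impact" rule; the membership functions, thresholds and scores are assumptions, not the model developed in the thesis.

```python
def high(x, a=0.4, b=0.8):
    """Fuzzy membership of 'high' on a 0..1 scale: 0 below a, 1 above b, linear between."""
    if x <= a:
        return 0.0
    if x >= b:
        return 1.0
    return (x - a) / (b - a)

def fuzzy_risk(likelihood, impact):
    """One illustrative rule: risk is high to the degree that BOTH the likelihood
    and the cost-overrun impact are high (min acts as the fuzzy AND)."""
    return min(high(likelihood), high(impact))

# Hypothetical risk factor scored by practitioners on a 0..1 scale.
print(round(fuzzy_risk(likelihood=0.7, impact=0.85), 2))  # -> 0.75
```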
27

Adaptive duty cycling in mobile sensor networks

Dyo, V. January 2009 (has links)
Mobile wireless sensor networks have recently attracted considerable attention. In particular, there is significant interest in applying mobile sensor networks to wildlife and environmental monitoring, medical, and human-centric applications. Energy is a critical factor for most real deployments of sensor networks: many mobile applications, such as environmental monitoring, require months of unattended operation of a large number of small battery-operated nodes. Due to slow advancements in battery and energy-harvesting technologies, energy efficiency will remain an important issue for a long time. The general approach to energy saving in wireless sensor networks is to coordinate the wake-up times of nodes to maximize their sleep time while achieving application goals such as low latency or high throughput. A number of duty-cycling solutions have been proposed for static sensor networks. These solutions often assume a fixed topology and use scheduling techniques to coordinate the wake-up of nodes depending on traffic flows. However, in some applications a fixed network topology cannot be assumed because some sensors are mobile: duty cycling in mobile networks is challenging because nodes need to continuously scan for neighbours, which is an energy-intensive process. This thesis investigates the issues related to duty cycling of mobile wireless networks. We argue that duty cycling in mobile networks has to be adaptive to both mobility and traffic patterns. The thesis presents a two-level approach which exploits temporal connectivity patterns and offers practical techniques for duty cycling in sparse and dense scenarios. At the macro level, the uncertainty of node discovery is a primary source of power consumption, as it forces mobile nodes to periodically scan or listen for neighbours, draining the battery. At this level, the approach is based on adapting the node-discovery procedure to the temporal activity patterns inherent to human-centric and animal-centric applications. At the micro level, the uncertainty of packet arrival is a major source of power consumption, and the approach mitigates this by using short-term synchronization, which constrains packet arrival times to predefined time slots. The approach is evaluated through simulation, and through implementation and deployment on a real sensor testbed. In particular, the performance of the macro-level protocol is evaluated through simulation with real human and animal mobility traces and through a deployment in Wytham Woods (Oxford) for badger tracking. The performance of the micro-level protocol is evaluated through measurements on a small-scale testbed.
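A minimal sketch of the macro-level idea, assuming a node learns a per-hour contact-activity estimate and scales its neighbour-discovery duty cycle accordingly; the EWMA update, duty-cycle bounds and activity pattern are illustrative assumptions, not the protocol deployed in the thesis.

```python
import random

HOURS = 24

class AdaptiveDiscovery:
    """Adapt the per-hour neighbour-discovery duty cycle to observed contact history."""
    def __init__(self, alpha=0.3, min_duty=0.02, max_duty=0.5):
        self.activity = [0.0] * HOURS  # learned contact likelihood per hour of day
        self.alpha = alpha             # EWMA learning rate
        self.min_duty, self.max_duty = min_duty, max_duty

    def duty_cycle(self, hour):
        """Scan longer during hours where contacts have historically been seen."""
        return self.min_duty + (self.max_duty - self.min_duty) * self.activity[hour]

    def record(self, hour, contact_seen):
        """EWMA update of the activity estimate for this hour."""
        hit = 1.0 if contact_seen else 0.0
        self.activity[hour] = (1 - self.alpha) * self.activity[hour] + self.alpha * hit

# Toy trace: contacts cluster around hours 7-9 and 18-20, as in animal/human activity.
node = AdaptiveDiscovery()
for day in range(30):
    for hour in range(HOURS):
        busy = hour in (7, 8, 9, 18, 19, 20)
        node.record(hour, random.random() < (0.8 if busy else 0.05))

print([round(node.duty_cycle(h), 2) for h in range(HOURS)])
```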
28

Construction heuristics for hard combinatorial optimisation problems

Zverovitch, Alexei E. January 2003 (has links)
No description available.
29

Stakeholder negotiations in component based development

Sampat, Nilesh Mahendrakumar January 2004 (has links)
No description available.
30

Predicting change propagation : algorithms, representations, software tools

Keller, René January 2007 (has links)
No description available.
