221

A study of how maintenance management is being practised in an organization

Chow, Shu-lin, Alan. January 1986 (has links)
Thesis (M.B.A.)--University of Hong Kong, 1986.
222

Diagnosing and tolerating bugs in deployed systems

Bond, Michael David 09 October 2012 (has links)
Deployed software is never free of bugs. These bugs cause software to fail, wasting billions of dollars and sometimes causing injury or death. Bugs are pervasive in modern software, which is increasingly complex due to demand for features, extensibility, and integration of components. Complete validation and exhaustive testing are infeasible for substantial software systems, and therefore deployed software exhibits untested and unanalyzed behaviors. Software behaves differently after deployment due to different environments and inputs, so developers cannot find and fix all bugs before deploying software, and they cannot easily reproduce post-deployment bugs outside of the deployed setting.

This dissertation argues that post-deployment is a compelling environment for diagnosing and tolerating bugs, and it introduces a general approach called post-deployment debugging. Techniques in this class are efficient enough to go unnoticed by users and accurate enough to find and report the sources of errors to developers. We demonstrate that they help developers find and fix bugs and help users get more functionality out of failing software.

To diagnose post-deployment failures, programmers need to understand the program operations--control and data flow--responsible for failures. Prior approaches for widespread tracking of control and data flow often slow programs by two times or more and increase memory usage significantly, making them impractical for online use. We present novel techniques for representing control and data flow that add modest overhead while still providing diagnostic information directly useful for fixing bugs. The first technique, probabilistic calling context (PCC), provides low-overhead context sensitivity to dynamic analyses that detect new or anomalous deployed behavior. Second, Bell statistically correlates control flow with data, and it reconstructs program locations associated with data. We apply Bell to leak detection, where it tracks and reports program locations responsible for real memory leaks. The third technique, origin tracking, tracks the originating program locations of unusable values such as null references, by storing origins in place of unusable values. These origins are cheap to track and are directly useful for diagnosing real-world null pointer exceptions.

Post-deployment diagnosis helps developers find and fix bugs, but in the meantime, users need help with failing software. We present techniques that tolerate memory leaks, which are particularly difficult to diagnose since they have no immediate symptoms and may take days or longer to materialize. Our techniques effectively narrow the gap between reachability and liveness by providing the illusion that dead but reachable objects do not consume resources. The techniques identify stale objects not used in a while and remove them from the application and garbage collector’s working set. The first technique, Melt, relocates stale memory to disk, so it can restore objects if the program uses them later. Growing leaks exhaust the disk eventually, and some embedded systems have no disk. Our second technique, leak pruning, addresses these limitations by automatically reclaiming likely leaked memory. It preserves semantics by waiting until heap exhaustion to reclaim memory--then intercepting program attempts to access reclaimed memory.

We demonstrate the utility and efficiency of post-deployment debugging on large, real-world programs--where they pinpoint bug causes and improve software availability. Post-deployment debugging efficiently exposes and exploits programming language semantics and opens up a promising direction for improving software robustness.
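To make the probabilistic-calling-context idea concrete, the sketch below keeps a per-thread integer context value and folds a call-site identifier into it at every call, so a deployed analysis can cheaply notice calling contexts never seen during testing. The update function, call-site IDs, and API names are illustrative assumptions for this sketch, not the dissertation's implementation (which operates inside a virtual machine).

```python
# A minimal, hypothetical sketch of the probabilistic-calling-context idea: each
# thread keeps a small integer "context value" that is updated with a cheap
# function at every call site, so a dynamic analysis can distinguish calling
# contexts without walking the stack.

class ProbabilisticCallingContext:
    def __init__(self):
        self.value = 0  # per-thread context value

    def enter_call(self, callsite_id: int) -> int:
        """On entry to a callee: fold the call-site ID into the context value."""
        saved = self.value
        self.value = (3 * self.value + callsite_id) & 0xFFFFFFFF  # cheap, low-collision update
        return saved  # the caller restores this on return

    def exit_call(self, saved: int) -> None:
        """On return: restore the caller's context value."""
        self.value = saved


pcc = ProbabilisticCallingContext()
seen_contexts = set()  # context values observed during testing/training

def checked_operation(callsite_id: int) -> None:
    saved = pcc.enter_call(callsite_id)
    try:
        # A deployed analysis might flag contexts never seen before release
        # as new or anomalous behaviour worth reporting to developers.
        if pcc.value not in seen_contexts:
            print(f"anomalous calling context: {pcc.value:#010x}")
    finally:
        pcc.exit_call(saved)

checked_operation(callsite_id=17)
```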
223

Management and maintenance of building: strategy to solve long standing building problem in Hong Kong

Mak, Kai-wah, 麥啟華 January 2006 (has links)
Thesis (Master of Housing Management), 2006.
224

Computer data base assessment of masonry bridges

Sihwa, L. January 1987 (has links)
This thesis is concerned with the development of a computer data base management system for the assessment of masonry bridges. The various techniques of assessment and remedial measures for masonry bridges are outlined and their shortcomings described, and a justification for an alternative method of assessment is given. A review of computer data base management systems is carried out; the reasons for adopting data base management systems are given, as well as the reasons for choosing a particular type of data base management system. The common faults associated with masonry structures are described, together with the problems of identifying them. The part played by the individual components of a masonry arch bridge is explained, and the significance of faults in these components is described. A detailed description of the general type of data base system chosen is given, followed by a detailed description of the special case of that type which was used for the project. A description of how the system was developed is given, followed by an account of how it operates. A detailed description of how the system can be used is then put forward, and the problems associated with the development of the system are outlined. Finally, the implications of the system for the practising engineer are described.
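As a rough illustration of the kind of relational data model such an assessment system might rest on, the sketch below records bridges, their structural components, and observed faults, then pulls the faults together per bridge. All table and column names are invented for illustration; they are not taken from the thesis or from the data base system it actually used.

```python
# Hypothetical sketch of a bridge-assessment schema (names are illustrative only).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE bridge (
    bridge_id   INTEGER PRIMARY KEY,
    name        TEXT NOT NULL,
    span_m      REAL,                 -- clear span in metres
    built_year  INTEGER
);
CREATE TABLE component (              -- e.g. arch barrel, spandrel wall, parapet
    component_id INTEGER PRIMARY KEY,
    bridge_id    INTEGER NOT NULL REFERENCES bridge(bridge_id),
    kind         TEXT NOT NULL
);
CREATE TABLE fault (                  -- observed defects and their severity
    fault_id     INTEGER PRIMARY KEY,
    component_id INTEGER NOT NULL REFERENCES component(component_id),
    description  TEXT NOT NULL,
    severity     INTEGER CHECK (severity BETWEEN 1 AND 5),
    recorded_on  TEXT                 -- ISO date of inspection
);

INSERT INTO bridge VALUES (1, 'Old Mill Bridge', 12.5, 1820);
INSERT INTO component VALUES (1, 1, 'arch barrel'), (2, 1, 'spandrel wall');
INSERT INTO fault VALUES (1, 2, 'longitudinal crack', 4, '1987-03-02');
""")

# Faults can then be collated per bridge to support an overall assessment.
for name, kind, desc, severity in conn.execute("""
    SELECT b.name, c.kind, f.description, f.severity
    FROM fault f
    JOIN component c ON c.component_id = f.component_id
    JOIN bridge b    ON b.bridge_id    = c.bridge_id
    ORDER BY f.severity DESC"""):
    print(f"{name}: {kind} - {desc} (severity {severity})")
```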
225

A robust method for using maintainability cost models (reliability, optimization, sensitivity, uncertainty)

Lewis, Doris Trinh, 1957- January 1986 (has links)
No description available.
226

From 'Fair Trade' to Fairtrade: the politics of values and ethical standard setting

Reinecke, Juliane Theresa Ute January 2011 (has links)
No description available.
227

An operations effectiveness model for automotive service systems

Rezai, Soheil 08 1900 (has links)
No description available.
228

Development of criteria for the construction of the most favorable network for short-run maintenance projects

Pretoni, José Alfredo 05 1900 (has links)
No description available.
229

Inverse software configuration management

McCrindle, Rachel Jane January 1998 (has links)
Software systems are playing an increasingly important role in almost every aspect of today’s society such that they impact on our businesses, industry, leisure, health and safety. Many of these systems are extremely large and complex and depend upon the correct interaction of many hundreds or even thousands of heterogeneous components. Commensurate with this increased reliance on software is the need for high quality products that meet customer expectations, perform reliably and which can be cost-effectively and safely maintained. Techniques such as software configuration management have proved to be invaluable during the development process to ensure that this is the case. However, there are a very large number of legacy systems which were not developed under controlled conditions, but which still need to be maintained due to the heavy investment incorporated within them. Such systems are characterised by extremely high program comprehension overheads and the probability that new errors will be introduced during the maintenance process, often with serious consequences.

To address the issues concerning maintenance of legacy systems this thesis has defined and developed a new process and associated maintenance model, Inverse Software Configuration Management (ISCM). This model centres on a layered approach to the program comprehension process through the definition of a number of software configuration abstractions. This information, together with the set of rules for reclaiming the information, is stored within an Extensible System Information Base (ESIB) via the definition of a Programming-in-the-Environment (PITE) language, the Inverse Configuration Description Language (ICDL).

In order to assist the application of the ISCM process across a wide range of software applications and system architectures, the PISCES (Proforma Identification Scheme for Configurations of Existing Systems) method has been developed as a series of defined procedures and guidelines. To underpin the method and to offer a user-friendly interface to the process, a series of templates, the Proforma Increasing Complexity Series (PICS), has been developed. To enable the useful employment of these techniques on large-scale systems, the subject of automation has been addressed through the development of a flexible meta-CASE environment, the PISCES M4 (MultiMedia Maintenance Manager) system. Of particular interest within this environment is the provision of a multimedia user interface (MUI) to the maintenance process. As a means of evaluating the PISCES method and to provide feedback into the ISCM process a number of practical applications have been modelled.

In summary, this research has considered a number of concepts, some of which are innovative in themselves, others of which are used in an innovative manner. In combination these concepts may be considered to considerably advance the knowledge and understanding of the comprehension process during the maintenance of legacy software systems. A number of publications have already resulted from the research and several more are in preparation. Additionally, a number of areas for further study have been identified, some of which are already underway as funded research and development projects.
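Purely as a hypothetical illustration of the layered-abstraction idea behind ISCM, the sketch below rolls lower-level configuration items recovered from an existing system up into a higher-level item, keeping only the dependencies that cross the boundary of the group. The data structures and names are invented for this sketch; they do not reproduce the thesis's ICDL, ESIB or PISCES definitions.

```python
# Hypothetical layered roll-up of recovered configuration items (illustrative only).
from dataclasses import dataclass, field
from typing import List

@dataclass
class ConfigItem:
    name: str
    level: str                         # e.g. "file", "module", "subsystem", "system"
    depends_on: List[str] = field(default_factory=list)

def roll_up(items: List[ConfigItem], parent_name: str, parent_level: str) -> ConfigItem:
    """Abstract a group of lower-level items into one higher-level configuration item."""
    internal = {item.name for item in items}
    external = sorted({dep for item in items for dep in item.depends_on
                       if dep not in internal})
    return ConfigItem(parent_name, parent_level, external)

files = [
    ConfigItem("payroll.c", "file", depends_on=["tax.h", "db.h"]),
    ConfigItem("tax.c",     "file", depends_on=["tax.h"]),
]
module = roll_up(files, "payroll-module", "module")
print(module)   # only dependencies outside the group survive the abstraction step
```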
230

Acquiring data designs from existing data-intensive programs

Yang, Hongji January 1994 (has links)
The problem area addressed in this thesis is extraction of a data design from existing data-intensive program code. The purpose of this is to help a software maintainer to understand a software system more easily, because a view of a software system at a high abstraction level can be obtained. Acquiring a data design from existing data-intensive program code is an important part of reverse engineering in software maintenance. A large proportion of software systems currently needing maintenance is data intensive. The research results in this thesis can be directly used in a reverse engineering tool.

A method has been developed for acquiring data designs from existing data-intensive programs, COBOL programs in particular. Program transformation is used as the main tool. Abstraction techniques and the method of crossing levels of abstraction are also studied for acquiring data designs. A prototype system has been implemented based on the method developed. This involved implementing a number of program transformations for data abstraction, and thus contributing to the production of a tool. Several case studies, including one using a real program with 7000 lines of source code, are presented. The experimental results show that the Entity-Relationship Attribute Diagrams derived by the prototype can represent the data designs of the original data-intensive programs.

The original contribution of the thesis is that the approach presented can identify and extract data relationships from the existing code by combining analysis of data with analysis of code. The approach is believed to provide better capabilities than other work in the field. The method has indicated that acquiring a data design from existing data-intensive program code by program transformation with human assistance is an effective method in software maintenance. Future work is suggested at the end of the thesis, including extending the method to build an industrial-strength tool.
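As a simplified, hypothetical illustration of the kind of abstraction step involved, the sketch below reads a flat COBOL record declaration and lifts it into a candidate entity with attributes, one small move towards an entity-relationship-attribute view. The parsing rules and the sample record are invented for this sketch and do not reflect the formal program transformations used in the thesis.

```python
# Hypothetical recovery of a candidate entity from a COBOL record layout.
import re

COBOL_RECORD = """
01  CUSTOMER-RECORD.
    05  CUSTOMER-ID      PIC 9(6).
    05  CUSTOMER-NAME    PIC X(30).
    05  CUSTOMER-BALANCE PIC 9(7)V99.
"""

def extract_entity(source: str):
    """Return (entity_name, [(attribute_name, picture), ...]) from a flat record."""
    entity, attributes = None, []
    for line in source.splitlines():
        m = re.match(r"\s*(\d+)\s+([A-Z0-9-]+)(?:\s+PIC\s+([^.\s]+))?", line)
        if not m:
            continue
        level, name, picture = m.groups()
        if level == "01":
            entity = name                           # group level: candidate entity
        elif picture:
            attributes.append((name, picture))      # elementary item: candidate attribute
    return entity, attributes

entity, attrs = extract_entity(COBOL_RECORD)
print(f"Entity: {entity}")
for name, picture in attrs:
    print(f"  attribute {name}: PIC {picture}")
```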
