891 |
Timing Observations From Rossi X-ray Timing Explorer (RXTE) / Beklen, Elif, 01 February 2004 (has links) (PDF)
In this thesis, RXTE observations of 4U 1907+09 are presented. Timing analysis of these data sets has yielded quasi-periodic oscillations (QPOs) at the orbital phases corresponding to the two flares in every orbital period. The known continuous spin-down trend and the QPO behaviour at the flares strongly suggest that a transient accretion disk forms during the flares. Our findings indicate that the neutron star passes through the equatorial wind of its Be companion star, and that during these passages a transient disk forms around the neutron star.
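A QPO of this kind shows up as a broad peak in the power spectrum of the X-ray light curve. The following is a minimal sketch of that analysis step, assuming an evenly binned light curve; it is not the thesis's actual pipeline, and all numbers below are invented for demonstration.

```python
# Hedged sketch: detect a QPO as a peak in a Leahy-normalized power spectrum.
# The bin size, count rate and oscillation frequency are illustrative only.
import numpy as np

def power_spectrum(counts, dt):
    """Leahy-normalized power spectrum of an evenly binned light curve."""
    counts = np.asarray(counts, dtype=float)
    ft = np.fft.rfft(counts)
    power = 2.0 * np.abs(ft) ** 2 / counts.sum()   # Leahy normalization
    freqs = np.fft.rfftfreq(len(counts), d=dt)
    return freqs[1:], power[1:]                    # drop the zero-frequency term

dt = 1.0                                           # assumed 1-second bins
t = np.arange(4096) * dt
rate = 100 + 10 * np.sin(2 * np.pi * 0.055 * t)    # weak oscillation at 0.055 Hz
counts = np.random.poisson(rate * dt)

freqs, power = power_spectrum(counts, dt)
print("strongest peak at %.4f Hz" % freqs[np.argmax(power)])
```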
|
892 |
Event-Oriented Dynamic Adaptation of Workflows: Model, Architecture and Implementation / Müller, Robert, 28 November 2004 (has links) (PDF)
Workflow management is widely accepted as a core technology for supporting long-term business processes in heterogeneous and distributed environments. However, conventional workflow management systems do not provide sufficient flexibility to cope with the broad range of failure situations that may occur during workflow execution. In particular, most systems do not allow a workflow to be dynamically adapted in response to a failure, e.g., by dynamically dropping or inserting execution steps. As a contribution to overcoming these limitations, this dissertation introduces the agent-based workflow management system AgentWork. AgentWork supports the definition, the execution and, as its main contribution, the event-oriented and semi-automated dynamic adaptation of workflows. Two strategies for automatic workflow adaptation are provided. Predictive adaptation adapts workflow parts affected by a failure in advance, typically as soon as the failure is detected; this is advantageous in many situations and leaves enough time to meet organizational constraints for the adapted workflow parts. Reactive adaptation is typically performed when predictive adaptation is not possible; in this case, adaptation is performed when the affected workflow part is about to be executed, e.g., before an activity is executed it is checked whether it is subject to an adaptation such as dropping, postponement or replacement. In particular, AgentWork provides the following contributions:

A Formal Model for Workflow Definition, Execution, and Estimation: AgentWork first provides an object-oriented workflow definition language. This language allows the definition of a workflow's control and data flow, and a workflow's cooperation with other workflows or workflow systems can also be specified. Second, AgentWork provides a precise workflow execution model. This is necessary because a running workflow is usually a complex collection of concurrent activities and data flow processes, and because failure situations and dynamic adaptations affect running workflows. Furthermore, mechanisms for estimating a workflow's future execution behavior are provided; these are of particular importance for predictive adaptation.

Mechanisms for Determining and Processing Failure Events and Failure Actions: AgentWork provides mechanisms to decide whether an event constitutes a failure situation and what has to be done to cope with it. This is achieved formally by evaluating event-condition-action rules, where the event-condition part describes under which condition an event is to be viewed as a failure event, and the action part represents the actions needed to cope with the failure. To support the temporal dimension of events and actions, this dissertation provides a novel event-condition-action model based on a temporal object-oriented logic.

Mechanisms for the Adaptation of Affected Workflows: In case of a failure situation, it has to be decided how an affected workflow is to be dynamically adapted at the node and edge level. AgentWork provides a novel approach that combines the two principal strategies, reactive adaptation and predictive adaptation; depending on the context of the failure, the appropriate strategy is selected. Furthermore, control flow adaptation operators are provided which translate failure actions into structural control flow adaptations, and data flow operators adapt the data flow after a control flow adaptation where necessary.

Mechanisms for the Handling of Inter-Workflow Implications of Failure Situations: AgentWork provides novel mechanisms to decide whether a failure situation occurring in one workflow affects other workflows that communicate and cooperate with it. In particular, AgentWork derives the temporal implications of a dynamic adaptation by estimating the duration needed to process the changed workflow definition in comparison with the original definition. Furthermore, qualitative implications of the dynamic change are determined; for this purpose, so-called quality measuring objects are introduced. All mechanisms provided by AgentWork allow users to interact during the failure handling process; in particular, the user may reject or modify suggested workflow adaptations.

A Prototypical Implementation: Finally, a prototypical CORBA-based implementation of AgentWork is described. This implementation supports the integration of AgentWork into the distributed and heterogeneous environments of real-world organizations such as hospitals or insurance enterprises.
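The event-condition-action rules at the core of AgentWork's failure handling can be illustrated with a small sketch. The classes and the example rule below are hypothetical stand-ins, not AgentWork's actual rule language or API, and the temporal dimension of the real model is omitted.

```python
# Hedged sketch of an event-condition-action (ECA) rule engine in the spirit
# of AgentWork's failure handling; all names here are illustrative and do not
# reproduce AgentWork's actual classes or rule language.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Rule:
    event_type: str                         # which events the rule listens to
    condition: Callable[[dict], bool]       # is this event a failure situation?
    action: Callable[[dict], None]          # adaptation to perform if so

@dataclass
class RuleEngine:
    rules: list[Rule] = field(default_factory=list)

    def register(self, rule: Rule) -> None:
        self.rules.append(rule)

    def handle_event(self, event: dict) -> None:
        for rule in self.rules:
            if rule.event_type == event["type"] and rule.condition(event):
                rule.action(event)          # e.g. drop, postpone, replace a step

# Example: if a result arrives too late, drop the dependent activity.
engine = RuleEngine()
engine.register(Rule(
    event_type="lab_result",
    condition=lambda e: e["delay_hours"] > 24,
    action=lambda e: print(f"dropping activity {e['activity']} (adaptation)"),
))
engine.handle_event({"type": "lab_result", "delay_hours": 30,
                     "activity": "followup_exam"})
```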
|
893 |
A conceptual framework for information management : formation of a discipline / Middleton, Michael Robert, January 2007 (has links)
The aim of the research was to investigate the formation of the information management discipline, propose a framework by which it is presently understood, and test that framework within a particular area of application, namely the provision of scientific and technological information (STI) services.
The work is presented as a PhD by Publication which comprises a narrative that encompasses the series of published papers, and includes excerpts from the book written to illustrate the province of the discipline.
In the book, the disciplinary context is detailed and exemplified through information management domains. The book consolidates information management principles within a framework defined by these operational, analytical and administrative domains. It was created by a redaction of prior epistemological proposals; an analysis of the understanding of practice that has been shaped by professional, institutional and information science influences; and a demonstration of practice within the domain framework.
The disciplinary framework was then used in a series of STI case studies, where it was found to provide an effective description of information management. Together, the book and subsequent case studies illustrate the principles utilised in information management and the way they are practised within different domains, along with an explanation of the manner in which the information management discipline has been formed. These should assist in directing future research and scholarship, particularly with respect to factors relevant to information services and indicators for their successful future application.
It is anticipated that this generalised description of practices across the range of interpretations of information management will enable practising information professionals to appreciate the relationship of their own work to disciplines that are converging towards a similar purpose, for example through a clearer indication of the extent to which technical and management standards may be applied and performance analysis undertaken.
Complementary outcomes achieved during the course of the work were: a comparative analysis of thesauri in the information field, which shows that the ways information professionals represent themselves in this field remain unreconciled; an historical examination of Australian STI services that provides pointers to their effective continuation; and a reconsideration of the relationship between librarianship and information management.
The work is presented as a compilation of papers comprising, first, extracts from the book to exemplify its consolidation of information management principles, and then a number of published and submitted papers that examine how the principles have been applied in practice. This is done in the context of six case studies of Australian STI services, including interviews with creators and developers and analysis of historical information.
|
894 |
Prototyping a natural language interface to entity-relationship databases / Doroja, Gerry S., Unknown Date (has links)
Thesis (M App Sc in Computer Science)--University of South Australia, 1993
|
895 |
Effective and Efficient Similarity Search in Video Databases / Jie Shao, Unknown Date (has links)
Searching for relevant information based on content features in video databases is an interesting and challenging research topic that has drawn much attention recently. Video similarity search has many practical applications, such as TV broadcast monitoring, copyright compliance enforcement and search result clustering. However, existing studies are limited in providing fast and accurate solutions, due to the diverse variations among videos in large collections. In this thesis, we introduce database support for effective and efficient video similarity search across various sources, even in the presence of transformation distortion, partial content re-ordering, insertion, deletion or replacement. Specifically, we focus on processing two different types of content-based queries: video clip retrieval in a large collection of segmented short videos, and video subsequence identification from a long unsegmented stream.
The first part of the thesis investigates how to process a number of individual kNN searches on the same database simultaneously, to reduce the computational overhead of current content-based video search systems. We propose a Dynamic Query Ordering (DQO) algorithm for efficiently processing Batch Nearest Neighbor (BNN) search in high-dimensional space, with advanced optimizations of both I/O cost and CPU cost.
The second part of the thesis addresses the previously unstudied problem of temporal localization of similar content in a long unsegmented video sequence, extended to identify occurrences whose ordering or length may differ from the query due to video content editing. A graph transformation and matching approach supported by the above BNN search is proposed, as a filter-and-refine query processing strategy that identifies the most similar subsequence effectively yet efficiently.
The third part of the thesis extends the Bounded Coordinate System (BCS) method we introduced earlier for video clip retrieval. A novel collective perspective is presented that exploits the distributional discrepancy of samples to assess the similarity between two video clips. Several non-parametric hypothesis tests from statistics are utilized to check whether two ensembles of points are drawn from the same distribution. The proposed similarity measures provide a more comprehensive analysis that captures the essence of invariant distribution information for retrieving video clips.
For each part, we present comprehensive experimental evaluations, which show improved performance compared with state-of-the-art methods. Finally, some planned extensions of this work are highlighted as future research objectives.
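The distribution-based similarity idea in the third part can be illustrated with a small sketch: treat each clip as an ensemble of frame-feature vectors and apply a non-parametric two-sample test per feature dimension. This is a simplified stand-in for the thesis's measures, using the Kolmogorov-Smirnov test as one concrete choice of test; the data and feature dimensions below are invented.

```python
# Hedged sketch: compare two video clips as distributions of frame features.
# A per-dimension two-sample KS test stands in for the thesis's measures.
import numpy as np
from scipy.stats import ks_2samp

def clip_similarity(clip_a: np.ndarray, clip_b: np.ndarray) -> float:
    """clip_a, clip_b: (n_frames, n_dims) arrays of frame features.
    Returns the mean KS p-value across feature dimensions; higher values
    mean less evidence that the clips differ in distribution."""
    pvals = [ks_2samp(clip_a[:, d], clip_b[:, d]).pvalue
             for d in range(clip_a.shape[1])]
    return float(np.mean(pvals))

rng = np.random.default_rng(0)
original = rng.normal(0.0, 1.0, size=(120, 8))    # 120 frames, 8-dim features
re_encoded = original + rng.normal(0.0, 0.05, size=original.shape)
unrelated = rng.normal(2.0, 1.5, size=(120, 8))

print(clip_similarity(original, re_encoded))  # high: same underlying content
print(clip_similarity(original, unrelated))   # near zero: different content
```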
|
896 |
Managing dynamic XML data / Fisher, Damien Kaine, School of Computer Science & Engineering, UNSW, January 2007 (has links)
Recent years have seen a surge in the popularity of XML, a markup language for representing semi-structured data. Some of this popularity can be attributed to the success the semi-structured data model has had in environments where the relational data model is insufficiently expressive. Concomitant with XML's growing popularity, database research has seen a rebirth of interest in tree-structured, hierarchical database systems. This thesis analyzes several problems that arise when constructing XML data management systems, particularly when such systems must handle dynamic content.
In the first chapter, we consider the problem of incremental schema validation, which arises in almost any XML database system. We build upon previous work by identifying several classes of schemas for which very efficient algorithms exist. We also develop an algorithm that works for any schema, and prove that it is optimal.
In the second chapter, we turn to the problem of improving query evaluation times on extremely large database systems. In particular, we boost the performance of structural and twig joins, fundamental XML query evaluation techniques, through the use of an adaptive index. This index tunes itself to the query workload, providing a 20-80% speedup for these join operators, and its adaptive nature also allows updates to the database to be tracked easily.
Finally, although accurate selectivity estimation is critical in any database system, owing to its importance in choosing optimal query plans, there has been very little work on selectivity estimation in the presence of updates. We ask whether it is possible to design a selectivity estimation structure for XML databases that is updateable and returns results with theoretically sound error guarantees. Through a combination of lower and upper bounds, we give strong evidence that this is unlikely in practice. Motivated by these results, we develop a heuristic selectivity estimation structure for XML databases. This structure is the first such synopsis that can handle all aspects of core XPath and is also updateable. Our experimental results demonstrate the efficacy of the approach.
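Structural joins, which the adaptive index accelerates, are commonly built on an interval (start, end) encoding of document order: a node is an ancestor of another exactly when its interval contains the other's. The sketch below shows a generic stack-based structural join under that assumption; it illustrates the operator itself, not the thesis's adaptive index, and the intervals are invented.

```python
# Hedged sketch of a structural (ancestor-descendant) join on interval
# encodings: node A is an ancestor of node D iff
# A.start < D.start and D.end < A.end.

def structural_join(ancestors, descendants):
    """Both inputs: lists of (start, end) pairs sorted by start.
    Returns all containing (ancestor, descendant) pairs via a stack-based
    merge, in time linear in input plus output size."""
    result, stack = [], []
    i = 0
    for d in descendants:
        # Push every ancestor that starts before this descendant,
        # discarding ancestors on the stack that ended before it starts.
        while i < len(ancestors) and ancestors[i][0] < d[0]:
            while stack and stack[-1][1] < ancestors[i][0]:
                stack.pop()
            stack.append(ancestors[i])
            i += 1
        while stack and stack[-1][1] < d[0]:
            stack.pop()
        # Every remaining stacked ancestor containing d joins with it.
        result.extend((a, d) for a in stack if d[1] < a[1])
    return result

sections = [(1, 10), (3, 8)]       # e.g. <section> nodes
paragraphs = [(4, 5), (9, 9)]      # e.g. <p> nodes
print(structural_join(sections, paragraphs))
# [((1, 10), (4, 5)), ((3, 8), (4, 5)), ((1, 10), (9, 9))]
```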
|
897 |
Object-oriented simulation of chemical and biochemical processes / Hocking, Damien, January 1997 (has links)
Bibliography: leaves 173-179. / xi, 221 leaves : ill. ; 30 cm. / Title page, contents and abstract only. The complete thesis in print form is available from the University Library. / This thesis aims to develop a basic object-oriented data structure and tools for the modelling and simulation of chemical and biochemical processes. The numerical methods are based on Newton's method and Gear's backward difference formulas. / Thesis (Ph.D.)--University of Adelaide, Dept. of Chemical Engineering, 1997
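As a rough illustration of the numerical core named above: Gear's backward difference formulas are implicit multistep methods whose per-step equations are solved by Newton iteration, which is what makes them suitable for stiff process models. The sketch below leans on SciPy's BDF implementation rather than the thesis's own object-oriented code, and the toy reactor model is invented.

```python
# Hedged sketch: integrate a stiff process model with a BDF (Gear-type)
# method, whose implicit steps are solved internally by Newton iteration.
# SciPy's solver stands in for the thesis's code; the model is illustrative.
from scipy.integrate import solve_ivp

def cstr(t, y, k=50.0, feed=1.0, tau=2.0):
    """Toy continuous stirred-tank reactor with fast reaction A -> B.
    y[0] = concentration of A, y[1] = concentration of B."""
    ca, cb = y
    return [feed / tau - ca / tau - k * ca,   # inflow, outflow, reaction
            -cb / tau + k * ca]

sol = solve_ivp(cstr, t_span=(0.0, 10.0), y0=[0.0, 0.0], method="BDF")
print(sol.y[:, -1])   # near-steady-state concentrations of A and B
```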
|
898 |
Privacy issues in health care and security of statistical databases / King, Tatiana, January 2008 (has links)
Research Doctorate - Doctor of Philosophy (PhD) / Privacy of personal information is becoming a major problem in health care, in light of the coming implementation of electronic health record (EHR) systems. There is evidence of increasing public concern over the privacy of personal health information that is to be stored in EHRs and widely used within interconnected systems. The issues for the health care system include inadequate legislation for privacy in health care, as well as a lack of effective technical and security measures. The work in this thesis is part of a larger project which aims to offer a comprehensive set of new techniques for protecting individuals' confidential health data used for statistical purposes. The research strategy was to explore concerns about privacy in relation to legislation, attitudes to health care, and technical protections in statistical databases. It comprised two different approaches:
* content analysis of the legal frameworks addressing the protection of privacy in Australian health care, and
* social research exploring privacy concerns in health care among Australians aged 18 years and over.
This thesis presents a new multi-stage study of the privacy concerns in health care raised by the development of EHR systems. Stage one involved 23 participants in four focus groups. Stage two was a national sample survey of 700 respondents aged 18 years and over. The results of the analysis are presented and compared with the results of other studies. The main findings of this thesis are:
* identification of the main inadequacies in the Australian legal system for protecting the privacy of health information in electronic health records;
* determination of the characteristics of people who have concerns about the privacy of their health information;
* identification of the items of a health record which have to be protected, and some reasons why.
The findings of the study will assist with decisions about, and solutions for, appropriate technical measures in statistical databases, as well as with addressing the inadequacies in existing privacy legislation. Furthermore, the work in this thesis confirmed low public awareness of the statistical use of personal health information, and a low level of trust in government-initiated automated systems of electronic health records. In conclusion, attitudes towards privacy depend on individual characteristics, but also on existing legislation, the public's awareness of this legislation, the means of resolving complaints, and awareness of technical means for privacy protection. It is therefore important to educate the public in order for EHR systems to function to their full potential and for future innovations in information technology to strengthen health care and medical research.
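One family of technical measures for protecting statistical databases is output perturbation, where query answers are released with calibrated noise so that individual records cannot be inferred. The sketch below uses Laplace noise on a count query in the style of differential privacy; this is a generic illustration, not necessarily the technique developed in the thesis's larger project, and the records are invented.

```python
# Hedged sketch of output perturbation for a statistical database: a count
# query answered with Laplace noise (differential-privacy style). This is a
# generic example; the thesis's own protection methods may differ.
import numpy as np

def noisy_count(records, predicate, epsilon=0.5, rng=None):
    """Count the records satisfying `predicate`, perturbed with Laplace
    noise of scale 1/epsilon (count queries have sensitivity 1)."""
    rng = rng or np.random.default_rng()
    true_count = sum(1 for r in records if predicate(r))
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

# Hypothetical EHR-style records.
patients = [{"age": 34, "condition": "diabetes"},
            {"age": 58, "condition": "asthma"},
            {"age": 41, "condition": "diabetes"}]
print(noisy_count(patients, lambda p: p["condition"] == "diabetes"))
```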
|
899 |
Algorithms for building and evaluating multiple sequence alignments / Lassmann, Timo, January 2006 (has links)
Diss. (summary) Stockholm : Karolinska Institutet, 2006. / Accompanied by 6 papers.
|
900 |
Bibliometrics as a research assessment tool - impact beyond the impact factor / Lundberg, Jonas, January 2006 (has links)
Diss. (summary) Stockholm : Karolinska Institutet, 2006. / Accompanied by 4 papers.
|