451.
A framework in support of structural monitoring by real time kinematic GPS and multisensor data. Ogaja, Clement. Surveying & Spatial Information Systems, Faculty of Engineering, UNSW, January 2002.
Due to structural damage from earthquakes and strong winds, engineers and scientists have focused on performance-based design methods and on sensors that directly measure relative displacements. Among the monitoring methods being considered are those using Global Positioning System (GPS) technology. However, while the technical feasibility of using GPS for recording relative displacements has been (and is still being) proven, the challenge for users is to determine how to make use of the relative displacements being recorded. This thesis proposes a mathematical framework that supports the use of RTK-GPS and multisensor data for structural monitoring. Its main contributions are as follows:

(a) Most of the emerging GPS-based structural monitoring systems consist of GPS receiver arrays (dozens or hundreds deployed on a structure), and the integrity of the GPS data such systems generate must be addressed. Based on this recognition, a methodology for integrity monitoring using a data redundancy approach has been proposed and tested for a multi-antenna measurement environment. The benefit of this approach is that it verifies the reliability of both the measuring instruments and the processed data, in contrast to existing methods, which verify only the reliability of the processed data.

(b) For real-time structural monitoring applications, high-frequency data must be generated. A methodology is proposed that can extract, in real time, deformation parameters from high-frequency RTK measurements. The methodology is tested and shown to be effective for determining the amplitude and frequency of structural dynamics; it is therefore suitable for the dynamic monitoring of towers, tall buildings, and long-span suspension bridges.

(c) In the overall effort of deformation analysis, large quantities of observations are required, both of causative phenomena (e.g., wind velocity, temperature, pressure) and of response effects (e.g., accelerations, coordinate displacements, tilt, strain). One problem to be circumvented is dealing with the excess data generated by process automation and the large number of instruments employed. This research proposes a methodology based on multivariate statistical process control that reduces the excess data generated on-line while maintaining a timely response analysis of the GPS data (since they give direct coordinate results).

Based on the above contributions, a demonstrator software system was designed and implemented for the Windows operating system. Tests of the system with datasets from UNSW experiments, the Calgary Tower monitoring experiment in Canada, the Xiamen Bank Building monitoring experiment in China, and the Republic Plaza Building monitoring experiment in Singapore have shown good results.
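The extraction algorithm of contribution (b) is not spelled out in the abstract; purely as an illustrative sketch, the Python below recovers the dominant frequency and amplitude of a simulated sway from a window of high-rate displacement samples via an FFT peak search. The function name, sampling rate, and signal parameters are all invented:

```python
import numpy as np

def dominant_motion(displacements, rate_hz):
    """Estimate the dominant frequency and amplitude of a structure's
    oscillation from a window of RTK-GPS displacement samples (metres)."""
    x = displacements - np.mean(displacements)      # remove the static offset
    spectrum = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / rate_hz)
    peak = np.argmax(np.abs(spectrum[1:])) + 1      # skip the DC bin
    amplitude = 2.0 * np.abs(spectrum[peak]) / len(x)
    return freqs[peak], amplitude

# Invented example: a 0.8 Hz sway of 5 mm amplitude sampled at 10 Hz RTK rate
t = np.arange(0, 60, 0.1)
series = 0.005 * np.sin(2 * np.pi * 0.8 * t) + np.random.normal(0, 0.001, t.size)
f, a = dominant_motion(series, rate_hz=10)
print(f"dominant frequency ~{f:.2f} Hz, amplitude ~{a * 1000:.1f} mm")
```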
452.
Insights into gene interactions using computational methods for literature and sequence resources. Dameh, Mustafa. January 2008.
At the beginning of this century many sequencing projects were finalised. As a result, an overwhelming amount of literature and sequence data has become available to biologists via online bioinformatics databases. These biological data have led to a better understanding of many organisms and have helped identify genes. However, there is still much to learn about the functions and interactions of genes.
This thesis is concerned with predicting gene interactions using two main online resources: biomedical literature and sequence data. The biomedical literature is used to explore and refine a text mining method, known as the "co-occurrence method", for predicting gene interactions. The sequence data are used in an analysis to predict an upper bound on the number of genes involved in gene interactions.
The co-occurrence method of text mining was extensively explored in this thesis. The effects of certain computational parameters on the relevancy of documents in which two genes co-occur were critically examined. The results showed that some computational parameters do have an impact on the outcome of the co-occurrence method and, if taken into consideration, can lead to better identification of documents that describe gene interactions. To explore the co-occurrence method, a prototype system was developed; as a result, it contains unique functions that are not present in currently available text mining systems.
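At its simplest, the co-occurrence method scores a gene pair by how often both genes are mentioned in the same document. A minimal sketch with invented documents and gene symbols (real systems need tokenization, synonym resolution, and the relevancy parameters examined in the thesis):

```python
from itertools import combinations
from collections import Counter

def cooccurrence_counts(documents, gene_symbols):
    """Count how many documents mention each pair of genes together.
    `documents` is an iterable of lowercase abstract strings and
    `gene_symbols` a set of lowercase gene names (both invented here)."""
    counts = Counter()
    for doc in documents:
        present = sorted(g for g in gene_symbols if g in doc)
        for pair in combinations(present, 2):
            counts[pair] += 1
    return counts

docs = [
    "brca1 and tp53 jointly regulate the dna damage response.",
    "tp53 mutations alter apoptosis.",
    "brca1 interacts with bard1 during repair.",
]
print(cooccurrence_counts(docs, {"brca1", "tp53", "bard1"}))
```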
Sequence data were used to predict the upper bound of the number of genes involved in gene interactions within a tissue. A novel approach was undertaken that analysed SAGE and EST sequence libraries using ecological estimation methods. The approach demonstrates that the species accumulation theory used in ecology can be applied to tag libraries (SAGE or EST) to predict an upper bound on the number of mRNA transcript species in a tissue.
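The estimator used in the thesis is not named in the abstract; as one example of the family of ecological richness estimators it refers to, the sketch below applies the classical Chao1 estimate to an invented tag library:

```python
from collections import Counter

def chao1(observations):
    """Chao1 species-richness estimate from a list of observed tags
    (one entry per sequenced SAGE/EST tag)."""
    abundance = Counter(observations)                    # tag -> times seen
    s_obs = len(abundance)                               # distinct tags seen
    f1 = sum(1 for c in abundance.values() if c == 1)    # singletons
    f2 = sum(1 for c in abundance.values() if c == 2)    # doubletons
    if f2 == 0:
        return s_obs + f1 * (f1 - 1) / 2.0               # bias-corrected form
    return s_obs + (f1 * f1) / (2.0 * f2)

tags = ["aact", "aact", "gcgt", "ttag", "ttag", "ttag", "cgat", "acgt"]
print(chao1(tags))   # estimated total transcript species in the tissue
```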
The novel computational analysis provided in this study can be used to extend the body of knowledge and insights relating to gene interactions and, hence, provide a better understanding of genes and their functions.
453.
The effects of informal computer keyboarding on straight copy speed and accuracy. Burke, Janice B., January 1988.
Thesis (M.S.)--Virginia Polytechnic Institute and State University, 1988. / Vita. Abstract. Includes bibliographical references (leaves 33-34). Also available via the Internet.
454.
Data mining algorithms for genomic analysis. Ao, Sio-iong. January 2007.
Thesis (Ph. D.)--University of Hong Kong, 2007. / Title proper from title frame. Also available in printed format.
455.
Computers in dentistry: a general review of computer applications in dentistry and a report on an experimental computer-based dental records system. Hunt, Diane Rosemary. January 1972 (PDF).
No description available.
456.
External Data Incorporation into Data Warehouses. Strand, Mattias. January 2005.
Most organizations are exposed to increasing competition and must be able to orient themselves in their environment. Therefore, they need comprehensive systems that are able to present a holistic view of the organization and its business. A data warehouse (DW) may support such tasks, due to its ability to integrate and aggregate data from organizationally internal as well as external sources and to present the data in formats that support strategic and tactical decision-makers.

Traditionally, DW development projects have focused on data originating from internal systems, whereas the benefits of data acquired external to the organization, i.e. external data, have been neglected. However, as it has become increasingly important to keep track of the competitive forces influencing an organization, external data is gaining more attention. Still, organizations experience problems when incorporating external data; these problems hinder them from exploiting the potential of external data and prevent them from achieving a return on their investments. In addition, current literature fails to assist organizations in avoiding or solving common problems.

Therefore, in order to support organizations in their external data incorporation initiatives, a set of guidelines has been developed and contextualized. The guidelines are complemented with a state-of-practice description, as a means of taking one step towards a cohesive body of knowledge regarding external data incorporation into DWs. The development of the guidelines, as well as the state-of-practice description, was based upon material from two literature reviews and four interview studies. The interview studies were conducted with the most important stakeholders in external data incorporation: the user organizations (2 studies), the DW consultants, and the suppliers of the external data. Additionally, in order to further ground the guidelines, interviews with a second set of DW consultants were conducted.
457.
An ontology-based system for representation and diagnosis of electrocardiogram (ECG) data. Dendamrongvit, Thidarat. 21 February 2006.
Electrocardiogram (ECG) data are stored and analyzed in different formats, on different devices, and on different computer platforms. There is a need for a platform-independent way to support ECG processing across these different resources, for the purposes of improving the quality of health care and disseminating research results. Currently, ECG devices are proprietary: devices from different manufacturers cannot communicate with each other. It is crucial to have an open standard for managing ECG data for representation and diagnosis.

This research explores methods for representation and diagnosis of ECG by developing an ontology for shared ECG data based on the Health Level Seven (HL7) standard. The developed ontology bridges the conceptual gap by integrating ECG waveform data, HL7 standard data descriptions, and cardiac diagnosis rules. The ontology is encoded in Extensible Markup Language (XML), providing a human- and machine-readable format. Thus, the interoperability issue is resolved and ECG data can be shared among different ECG devices and systems. The ontology also provides a mechanism for diagnostic decision support, through an automated ECG diagnosis system, for a medical technician or physician in the diagnosis of cardiac disease. An experiment was conducted to validate the interoperability of the ontology and to assess the accuracy of the diagnosis model it provides. Results showed 100% interoperability for ECG data provided through eight different databases, and 93% accuracy in diagnosing normal and abnormal cardiac conditions. / Graduation date: 2006
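As a toy illustration of the idea (vendor-neutral XML carrying ECG measurements, with diagnosis rules reading from that neutral form), the sketch below invents its own element names and a single rate rule; the real HL7-based schema and the thesis's rule set are far richer:

```python
import xml.etree.ElementTree as ET

# Hypothetical element names; the real HL7 aECG schema differs.
record = ET.Element("ecgRecord", patientId="demo-001")
ET.SubElement(record, "leadII", samplingRateHz="500")
ET.SubElement(record, "rrIntervalMs").text = "1250"

def diagnose(rec):
    """Toy rule: classify heart rate from the mean R-R interval."""
    rr_ms = float(rec.findtext("rrIntervalMs"))
    bpm = 60000.0 / rr_ms
    if bpm < 60:
        return f"bradycardia ({bpm:.0f} bpm)"
    if bpm > 100:
        return f"tachycardia ({bpm:.0f} bpm)"
    return f"normal sinus rate ({bpm:.0f} bpm)"

print(diagnose(record))   # -> bradycardia (48 bpm)
```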
458.
Estimating absence. Kincaid, Thomas M. 25 November 1997.
The problem addressed is the absence of a class of objects in a finite set of objects, investigated by considering absence of a species and absence in relation to a threshold. Regarding absence of a species, we demonstrate that the assessed probability of absence of the class of objects in the finite set, given absence of the class in the sample, is either exactly or approximately equal to the probability of observing a specific single object from the class given the protocol for observation, where probability is interpreted as a degree of belief. Regarding absence in relation to a threshold, we develop a new estimator of the upper confidence bound for the finite population distribution function evaluated at the threshold and investigate its properties for a set of finite populations. In addition, we show that estimation regarding the initial ordered value in the finite population has limited usefulness. / Graduation date: 1998
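For intuition about bounding what was never observed, the classical binomial calculation below gives an exact upper confidence bound on a class proportion when a simple random sample of n objects contains no members of the class. It is offered as context only; the thesis's estimator addresses the finite-population setting:

```python
def absence_upper_bound(n, alpha=0.05):
    """Upper (1 - alpha) confidence bound for the class proportion p when
    a sample of n objects contains no members of the class: the largest p
    with (1 - p)**n >= alpha, i.e. p_u = 1 - alpha**(1/n).  A classical
    binomial argument, not the thesis's finite-population estimator."""
    return 1.0 - alpha ** (1.0 / n)

for n in (10, 50, 300):
    print(n, round(absence_upper_bound(n), 4))
# The familiar "rule of three": for large n, p_u is roughly 3 / n.
```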
459.
Computer aided tolerance analysis and process selection for AutoCAD. Pamulapati, Sairam V. 25 February 1997.
The fundamental objective of a design engineer practising tolerance technology is to transform functional requirements into tolerances on individual parts, based on existing data and algorithms for design tolerance analysis and synthesis. This transformation must also consider existing process capabilities and manufacturing costs to determine the optimal tolerances and processes.
The main objective of this research is to present an integrated but modular system for computer aided tolerance allocation, tolerance synthesis, and process selection. The module is implemented in AutoCAD using ARX 1.1 (AutoCAD Runtime Extension libraries), MFC 4.2, Visual C++ 4.2, Access 7.0, the AutoCAD Development System, AutoLISP, and other AutoCAD customization tools.
The integrated module has two functions:
a. Tolerance analysis and allocation: This module uses several statistical and optimization techniques to aggregate component tolerances. Random number generators are used to simulate the historical data required by most of the optimization techniques, and various component tolerance distributions are considered (Beta, Normal, and Uniform). The proposed tolerance analysis method takes into consideration the distribution of each fabrication process in the assembly. For assemblies with non-normal natural process tolerance distributions, this allows designers to assign assembly tolerances that are closer to actual assembly tolerances than other statistical methods permit. This is verified by comparing the proposed method against Monte Carlo simulations: it yields assembly tolerances similar to those provided by Monte Carlo simulation, yet is significantly less computationally intensive (see the first sketch below).
b. Process selection: This thesis introduces a methodology for concurrent design that allocates tolerances and selects manufacturing processes for minimum cost, bringing manufacturing concerns into the design process. Independent, unordered manufacturing processes are assumed for each assembly. A simulated annealing technique controls a Monte Carlo analysis: a cost-tolerance curve is defined for each component part in the assembly, and the optimization algorithm varies the tolerance of each component, searching systematically for the combination of tolerances that minimizes cost (see the second sketch below). The proposed tolerance allocation/process selection method was found to be superior to other tolerance allocation methods based on manufacturing costs. / Graduation date: 1997
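A minimal sketch of the stack-up analysis described in (a), assuming a simple linear (sum) stack of three component deviations drawn from Normal, Uniform, and Beta distributions; all parameters are invented:

```python
import random

def assembly_tolerance_mc(component_samplers, trials=100_000):
    """Monte Carlo stack-up: draw each component's deviation from its own
    distribution, sum them, and report the spread of the assembly gap."""
    totals = [sum(draw() for draw in component_samplers) for _ in range(trials)]
    mean = sum(totals) / trials
    var = sum((t - mean) ** 2 for t in totals) / (trials - 1)
    return mean, var ** 0.5

samplers = [
    lambda: random.gauss(0.0, 0.02),                    # normal part
    lambda: random.uniform(-0.05, 0.05),                # uniform part
    lambda: 0.1 * (random.betavariate(2, 5) - 2 / 7),   # skewed beta part, centered
]
mu, sigma = assembly_tolerance_mc(samplers)
print(f"assembly deviation: mean {mu:+.4f}, sigma {sigma:.4f}")
```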
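And a compact sketch of the allocation loop described in (b), assuming cost-tolerance curves of the form cost = a/tolerance and, for brevity, a root-sum-square feasibility check in place of the inner Monte Carlo analysis; all constants are illustrative:

```python
import math, random

def total_cost(tols, cost):   # assumed cost-tolerance curves: cost_i = a_i / tol_i
    return sum(a / t for a, t in zip(cost, tols))

def assembly_ok(tols, spec):  # root-sum-square stack checked against the spec
    return math.sqrt(sum(t * t for t in tols)) <= spec

def anneal(cost, spec, temp=1.0, cooling=0.995, steps=20_000):
    tols = [spec / (2 * math.sqrt(len(cost)))] * len(cost)   # feasible start
    best = list(tols)
    for _ in range(steps):
        cand = [max(1e-4, t * random.uniform(0.9, 1.1)) for t in tols]
        if assembly_ok(cand, spec):
            delta = total_cost(cand, cost) - total_cost(tols, cost)
            if delta < 0 or random.random() < math.exp(-delta / temp):
                tols = cand                     # Metropolis acceptance
                if total_cost(tols, cost) < total_cost(best, cost):
                    best = list(tols)
        temp *= cooling                         # cool the temperature
    return best

best = anneal(cost=[1.0, 2.0, 0.5], spec=0.1)
print([round(t, 4) for t in best], round(total_cost(best, [1.0, 2.0, 0.5]), 1))
```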
460.
Design and analysis of hard real-time systems. Zhu, Jiang. 16 November 1993.
First, we study hard real-time scheduling problems where each task is defined by a four-tuple (r, c, p, d): r being its release time, c its computation time, p its period, and d its deadline. The question is whether all tasks can meet their deadlines on one processor. If not, how many processors are needed?
For the one-processor problem, we prove two sufficient conditions for a (restricted) periodic task set to meet deadlines. The two conditions can be applied to both preemptive and non-preemptive scheduling, in sharp contrast to earlier results. If a periodic task set can meet deadlines under any algorithm which does not idle the processor as long as there are tasks ready to execute, it must satisfy our second condition. We also prove a necessary condition for a periodic task set to meet deadlines under any scheduling algorithm.
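The thesis's own conditions are not stated in the abstract; as a reference point for what a utilization-based sufficient test looks like, here is the classical Liu and Layland bound for preemptive rate-monotonic scheduling (a minimal sketch; tasks are (c, p) pairs with deadline equal to period):

```python
def rm_utilization_test(tasks):
    """Classical Liu-Layland sufficient test: a set of n periodic tasks
    (c, p) is schedulable under preemptive rate-monotonic priorities if
    U = sum(c/p) <= n * (2**(1/n) - 1).  Shown for illustration only;
    it is not one of the thesis's conditions."""
    n = len(tasks)
    u = sum(c / p for c, p in tasks)
    return u <= n * (2 ** (1.0 / n) - 1), u

ok, u = rm_utilization_test([(1, 4), (2, 8), (1, 10)])
print(ok, round(u, 3))   # U = 0.6 <= 3*(2**(1/3) - 1) ~ 0.780
```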
We present a method for transforming a sporadic task to an equivalent periodic task. The transformation method is optimal with respect to non-preemptive scheduling. With this method, all results on scheduling periodic task sets can be applied to sets of both periodic and sporadic tasks.
For the scheduling problem in distributed memory systems, we propose various heuristic algorithms which try to use as few processors as possible to meet deadlines. Although our algorithms are non-preemptive, our simulation results show that they can outperform heuristic algorithms based on the famous preemptive rate monotonic algorithm in terms of the number of processors used and processor utilization.
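The heuristics themselves are not described in the abstract; the sketch below shows only the general shape of such partitioning heuristics (a first-fit assignment by decreasing utilization) and is not the thesis's algorithm:

```python
def first_fit_partition(tasks, bound=1.0):
    """Assign tasks (c, p) to processors by first-fit on utilization,
    keeping each processor's total utilization within `bound` (1.0 for
    preemptive EDF; tighter bounds apply for other policies).  A classical
    partitioning heuristic, used here purely for illustration."""
    processors = []   # running utilization total per processor
    assignment = []
    for c, p in sorted(tasks, key=lambda t: -t[0] / t[1]):  # heaviest first
        u = c / p
        for i, load in enumerate(processors):
            if load + u <= bound:
                processors[i] += u
                assignment.append((c, p, i))
                break
        else:                                   # no processor fits: open one
            processors.append(u)
            assignment.append((c, p, len(processors) - 1))
    return assignment, len(processors)

_, m = first_fit_partition([(2, 5), (3, 10), (1, 4), (4, 6)])
print(f"{m} processors needed")
```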
Second, we describe a hard real-time software development environment, called HaRTS, which consists of a design tool and a scheduling tool. The design tool supports a hierarchical design diagram which combines the control and data flow of a hard real-time application. The design diagram is quite intuitive, yet it can be automatically translated into Ada code and analyzed for schedulability. The scheduling tool schedules precedence-constrained periodic task sets and simulates task execution with highly animated user interfaces, going beyond the traditional way of examining a schedule as a static Gantt chart. / Graduation date: 1994