21. Work domain analysis and hydro power controller information needs / Bright, Lisa. Unknown Date.
No description available.
22. ProjectSnap: addressing the project fragmentation problem / Brimhall, Erin Michael. January 2007.
Thesis (M.S.)--University of Montana, 2007. / Title from title screen. Description based on contents viewed Oct. 9, 2007. Includes bibliographical references (p. 59).
23. Information Management of Intelligent Electronic Devices / Lo, Jinqin. January 2013.
The advent of cheaper, more powerful substation relays, now commonly known as Intelligent Electronic Devices (IEDs), will have a substantial impact on Powerco information systems. Their impact is two-fold: first, the sheer volume of information associated with modern relays will require a more capable relay management system than the one currently in use; second, far more data can be gathered and used for network improvement. Improper management of either type of information will lead to worse outcomes for Powerco's reliability performance, and ultimately its financial performance.
This report details two projects concerned with the management of IED information. The first concerns a proposed upgrade of the Powerco relay management system, involving requirements engineering, an investigation of database systems and a commercial tendering process. It was found that the data schema has a large effect on the efficiency and efficacy of relay management systems. Given resource constraints and the need for a proven-for-purpose data schema, the optimal solution is to procure a proven relay management system from an established vendor.
The second project focused on the deployment of automated event report collection software. The impetus for this project was the inefficiency of manually collecting and processing network event data from substation relays, which hampers the network improvement process and negatively affects Powerco network reliability. A cost-benefit analysis was performed to determine whether such functionality was worth deploying across the network. It was found that automated event report collection produces operational savings through reduced technician dispatch frequency. Other benefits include more data captured, faster fault response times, and a transformation of the existing reactive network improvement engineering process into a proactive one, although this will require additional engineering resources to be dedicated to event analysis. A deployment schedule was formulated based on historical reliability and agreed service levels.
The larger implication of this report is that additional resourcing and careful consideration of information management processes are required to take advantage of the greater volume of data generated by IEDs.
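As an illustration only, the kind of cost-benefit reasoning described in this abstract reduces to a simple payback calculation. All figures and names below are hypothetical and not Powerco's actual numbers.

```python
def payback_years(deploy_cost: float,
                  annual_dispatch_savings: float,
                  annual_running_cost: float) -> float:
    """Years until cumulative net savings cover the up-front deployment
    cost of automated event report collection (all figures hypothetical)."""
    net_annual = annual_dispatch_savings - annual_running_cost
    if net_annual <= 0:
        return float("inf")  # the deployment never pays back
    return deploy_cost / net_annual


# Hypothetical example: a $120k rollout, $50k/yr in avoided technician
# dispatches, $10k/yr in software running costs -> 3-year payback.
print(payback_years(120_000, 50_000, 10_000))
```

In practice such an analysis would also weigh the harder-to-quantify benefits the abstract lists, such as faster fault response and richer data capture.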
24. Information management in production planning and control for virtual enterprises / Zhou, Qu. January 1999.
No description available.
25. Project information, office automation, and quality in building production process in Saudi Arabia / Al-Rugaib, Thamer A. January 1996.
No description available.
26. Sequential Data Mining and its Applications to Pharmacovigilance / Qin, Xiao. 02 April 2019.
With the phenomenal growth of digital devices, coupled with their ever-increasing capabilities for data generation and storage, sequential data is becoming more and more ubiquitous in a wide spectrum of application scenarios. Sequential data takes various forms, such as temporal databases, time series and text (word sequences), where the first is synchronous over time and the latter two are often generated in an asynchronous fashion. In order to derive valuable insights, it is critical to learn and understand the behavior dynamics as well as the causality relationships across sequences. Pharmacovigilance is defined as the science and activities relating to the detection, assessment, understanding and prevention of adverse drug reactions (ADRs) or other drug-related problems. In the post-marketing phase, the effectiveness and safety of drugs is monitored by regulatory agencies, a process known as post-marketing surveillance. A Spontaneous Reporting System (SRS), e.g., the U.S. Food and Drug Administration Adverse Event Reporting System (FAERS), collects drug safety complaints over time, providing the key evidence to support regulatory actions towards the reported products. With the rapid growth of reporting volume and velocity, data mining techniques promise to help drug safety reviewers perform supervision tasks in a timely fashion. My dissertation studies the problem of exploring, analyzing and modeling three types of sequential data within a typical SRS.

Temporal Correlations Discovery and Exploration. An SRS can be seen as a temporal database in which each transaction encodes the co-occurrence of some reported drugs and observed ADRs in a time frame. Temporal association rule learning (TARL) has been proven to be a prime candidate for deriving associations among the objects in such a temporal database. However, TARL is parameterized and computationally expensive, making it difficult to use for discovering interesting associations among drugs and ADRs in a timely fashion. Worse yet, existing interestingness measures fail to capture the significance of certain types of association in the context of pharmacovigilance, e.g. drug-drug interaction (DDI) related ADRs. To discover DDI-related ADRs using TARL, we propose an interestingness measure that aligns with the DDI semantics, together with an interactive temporal association analytics framework that supports real-time temporal association derivation and exploration.

Anomaly Detection in Time Series. Abnormal reports may reveal meaningful ADR cases that are overlooked by frequency-based data mining approaches such as association rule learning, where patterns are derived from frequently occurring events. In addition, the sense of abnormality or rareness may vary with context. For example, an ADR that normally occurs in the adult population may rarely affect the youth population, but with life-threatening outcomes. The local outlier factor (LOF) is a suitable approach for capturing such locally abnormal phenomena. However, existing LOF algorithms and their variations fail to cope with high-velocity data streams due to their high algorithmic complexity. We propose new local outlier semantics that leverage kernel density estimation (KDE) to effectively detect local outliers in streaming data, along with a strategy to continuously detect the top-N KDE-based local outliers over streams, called KELOS -- the first streaming local outlier detection approach with linear time complexity.

Text Modeling. Language modeling (LM) is a fundamental problem in many natural language processing (NLP) tasks: the development of probabilistic models that predict the next word in a sequence given the words that precede it. Recently, LM has been advanced by the success of recurrent neural networks (RNNs), which overcome the Markov assumption made in traditional statistical language models. In theory, RNNs such as Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) networks can "remember" arbitrarily long spans of history if provided with enough capacity. In practice, however, they do not perform well on very long sequences, as the gradient computation for RNNs becomes increasingly ill-behaved as the expected dependency becomes longer. One way of tackling this problem is to feed succinct information that encodes the semantic structure of the entire document, such as latent topics, as context to guide the modeling process. Clinical narratives that describe complex medical events are often accompanied by meta-information such as a patient's demographics, diagnoses and medications. This structured information implicitly relates to the logical and semantic structure of the entire narrative, and thus affects vocabulary choices in its composition. To leverage this meta-information, we propose a supervised topic-compositional neural language model, called MeTRNN, that integrates the strength of supervised topic modeling in capturing global semantics with the capacity of recurrent neural networks (RNNs) in modeling local word dependencies.
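As a rough illustration of the KDE-based local outlier idea behind KELOS -- not the dissertation's actual algorithm, and without its streaming or top-N machinery -- a point can be scored by comparing the kernel density estimates of its nearest neighbours against its own; ratios well above 1 flag local outliers:

```python
import numpy as np


def gaussian_kde_density(points: np.ndarray, bandwidth: float) -> np.ndarray:
    """Gaussian kernel density estimate at each point, over all points."""
    # Pairwise squared distances, shape (n, n).
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2 * bandwidth ** 2)).mean(axis=1)


def kde_local_outlier_scores(points: np.ndarray,
                             k: int = 5,
                             bandwidth: float = 1.0) -> np.ndarray:
    """Score each point by the ratio of its k nearest neighbours' mean
    density to its own density (a KDE analogue of the LOF idea)."""
    dens = gaussian_kde_density(points, bandwidth)
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(axis=-1)
    scores = np.empty(len(points))
    for i in range(len(points)):
        nn = np.argsort(d2[i])[1:k + 1]  # skip the point itself
        scores[i] = dens[nn].mean() / dens[i]
    return scores


# A tight cluster plus one distant point: the distant point sits in a
# low-density region relative to its neighbours and scores highest.
rng = np.random.default_rng(0)
pts = np.vstack([rng.normal(0.0, 0.5, (20, 2)), [[8.0, 8.0]]])
scores = kde_local_outlier_scores(pts, k=5, bandwidth=1.0)
```

The streaming contribution of KELOS lies in maintaining such density estimates incrementally as reports arrive, which this batch sketch deliberately omits.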
27. Improving the management of informal engineering information through electronic logbooks / McAlpine, Hamish. January 2010.
In modern organisations, the effective use of information and knowledge is a prerequisite for sustaining competitive advantage. To support this, considerable work has been undertaken by both academia and industry to improve the representation, organisation and accessibility of information across and between organisations. This improved information and communication management has had a positive impact on business processes, improved decision making and -- importantly -- increased levels of innovation. However, one area that has received less attention is personal information, such as the logbooks maintained by a majority of engineers. Many of these logbooks could contain a significant amount of design information and knowledge that is not recorded elsewhere, such as design rationale and records of alternatives considered. Indeed, much of what we know about the work of historically important engineers and scientists such as Leonardo da Vinci (1452-1519) has come from their personal collections of notes. Despite this, logbooks are rarely formally managed, with the content usually available only to the authoring engineer. It is arguable that such potentially valuable information should be more easily accessible to the individual and the wider organisation, where it could be of benefit as a personal productivity aid, a design record, and for intellectual property and audit/traceability purposes. It follows that there is a need for improved methods of managing logbook content, which is the challenge this research addresses. The research first seeks to establish the reasons for the stubborn persistence of paper logbooks as such a common feature of engineering work, despite almost every other aspect being undertaken in the digital domain. Various studies are undertaken, revealing that logbooks serve a wide range of purposes, including the critical role of 'mediating' between various information sources.
The requirements arising from these studies are combined with a review of existing e-logbook technologies to produce a comprehensive requirement specification for an engineering electronic logbook. At its core is a novel information management strategy based on a combination of templates to pre-structure entries and a lightweight information classification schema to structure the information. The key features of the specification are the ability for logbook information to be searched, shared and integrated with other information systems, whilst retaining important characteristics of paper logbooks (such as sketching support). The contributions of this thesis are a fundamental understanding of logbook use and content, together with a comprehensive specification for an engineering e-logbook that improves logbook information management for both the individual and the wider organisation.
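A minimal sketch of what such a template-plus-classification entry might look like in code; the field names, template names and tags below are illustrative assumptions, not the schema specified in the thesis:

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class LogbookEntry:
    """One e-logbook entry: a template pre-structures the content, and a
    small tag vocabulary classifies it for later search and sharing."""
    author: str
    created: date
    template: str                    # e.g. "meeting", "calculation", "sketch"
    body: str
    tags: list = field(default_factory=list)          # lightweight classification
    linked_items: list = field(default_factory=list)  # links to other systems


def search_by_tag(entries: list, tag: str) -> list:
    """Retrieve all entries carrying a given classification tag."""
    return [e for e in entries if tag in e.tags]
```

The point of the lightweight schema is exactly this kind of retrieval: once entries carry even minimal structured metadata, they become searchable and shareable without losing the free-form body that makes paper logbooks attractive.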
28. Improvements in the effectiveness of information management in construction organisations / Sheriff, Abdullahi. January 2011.
Managing information effectively is crucial for improving processes within organisations. To this end, a more holistic approach is required which aligns an organisation's information and technology needs with its people, processes and business model. This shifts the emphasis from collaborating on individual projects to a long-term view of how best to support an organisation's business strategy. It also shifts the emphasis from managing information (storing information to exploit it) to managing with information (harnessing information to gain competitive advantage across the organisation). This thesis is the culmination of a four-year EngD research programme critically investigating improvements in the effectiveness of Information Management (IM) in construction industry based organisations. It begins with an introduction to the research and its aims and objectives, then presents a detailed review of related literature and of the methodology employed in conducting the research. The eight research tasks, carried out using extensive, varied and appropriate methods including literature reviews, case studies, interviews, surveys and workshops, are then explained, and a number of key recommendations are drawn from them. The findings define IM as distinct from Information Technology (IT) and Information Systems (IS) and provide a critical insight into IM in construction industry based organisations, including its strategic drivers, key influencers and barriers.
It further highlights the four components of a holistic approach to IM: the Content Model -- the nature and structure of the content, including metadata; the Enterprise Model -- the operational structure of the company, including its business processes and information-dependent activities; Systems and Technology -- the most appropriate technological solutions in light of the defined content and enterprise models; and Implementation and Change Management -- support for people and managing the transition to improved ways of working. The findings show that a clear alignment between an organisation's information needs and its operational processes is key to developing and implementing an effective IM strategy. A unified approach is more appropriate for organisations than a single, one-size-fits-all approach, as multi-dimensional, context-specific factors influence the nature of each organisation's needs and therefore the most suitable solutions to address them. The findings also demonstrate the importance of metadata in integrating organisations and enabling a holistic IM strategy. A standard IM framework is developed and validated which provides a structured and systematic approach for organisations seeking to develop suitable metadata standards to meet their needs. The holistic approach, applied to a number of case studies, provides novel insights into the use of metadata in information retrieval, the development of aligned information architecture, and the creation of holistic strategies for improving knowledge sharing and innovation. The thesis concludes with a critical review of the research and a number of key recommendations for organisations, IM research and the wider construction industry. The findings and insights have been, and continue to be, disseminated through a number of peer-reviewed publications.
29. Introduktion i regleroptimering [Introduction to control optimisation] / Olsson, Camilla. January 2003.
No description available.
30. How People Re-find Information When the Web Changes / Teevan, Jaime. 18 June 2004.
This paper investigates how people return to information in a dynamic information environment. For example, a person might want to return to Web content via a link encountered earlier on a Web page, only to learn that the link has since been removed. Changes can benefit users by providing new information, but they hinder returning to previously viewed information. The observational study presented here analyzed instances, collected via a Web search, where people expressed difficulty re-finding information because of changes to the information or its environment. A number of interesting observations arose from this analysis, including that the path originally taken to get to the information target appeared important in its re-retrieval, whereas, surprisingly, the temporal aspects of when the information was seen before were not. While people expressed frustration when problems arose, an explanation of why the change had occurred was often sufficient to allay that frustration, even in the absence of a solution. The implications of these observations for systems that support re-finding in dynamic environments are discussed.