1 |
Multiple device WAP based information systems : a set of development guidelines
Metter, M. P. Unknown Date (has links)
No description available.
|
2 |
Synthesising Web Search Queries from Example Text Documents
Patro, S Unknown Date (has links) (PDF)
The huge number of documents available on the Web represents a challenge for information retrieval (IR) systems. The explosive growth in the number of documents published on the Web has made search engines the main means of initiating interaction with the Internet. There are many good search engines, but users are often not satisfied with the results they return. In many cases the answers are not relevant to the user's information need, forcing the user to sift manually through a long list to locate the desired documents. Often the user must refine their query repeatedly because they lack the domain knowledge to formulate it precisely. Although average users know what kind of information they want, they find it difficult to express that need in a way the search engine can act on effectively. The specification of such a query is limited by the user's vocabulary and knowledge of the search domain. Even when disjunctions or conjunctions of keywords are chosen as the way of expressing the search goal, as existing search engines support, the user may not know which set of keywords defines the collection of desired documents precisely. Good query formulation requires that a user can somehow predict which terms appear in documents relevant to the information need; accurate term prediction in turn requires extensive knowledge of the document collection, which may be hard to obtain, especially for large collections. In the field of information retrieval it has long been recognised that, although users have difficulty expressing exactly the information they require, they can judge retrieved documents as relevant or irrelevant to their information need. This led to the notion of relevance feedback: users mark documents as relevant to their needs and present this information to the retrieval system, which can then retrieve more documents like the relevant ones through a process known as query expansion. This research explores the use of relevance feedback techniques to automatically discover words related to a query from the contents of the user-identified relevant documents. From this set of words, an algorithm synthesises the user query in the form of a Boolean expression. The basic idea is that a synthesised query providing a richer representation of the user's need will increase the number of relevant documents retrieved when submitted to a search engine. The three objectives for the algorithm are that the synthesised query have good recall, good precision and, not least, a form and size acceptable to the intended search engine. The algorithm starts by posing a first-cut search query to a search engine, whose outcome is a set of documents. Since documents found on the Web are text documents, the user marks each as Relevant or Irrelevant according to their information need. From these two sets of documents the algorithm creates a Boolean search query in five steps:
1. Construct a CNF (Conjunctive Normal Form) Boolean expression of terms that selects every document in the set Relevant and rejects every document in the set Irrelevant. The expressions so constructed are often too large to be acceptable to a search engine.
2. Transform the CNF expression into an equivalent DNF (Disjunctive Normal Form) expression. Remove redundant minterms from further consideration; the set of non-redundant minterms is referred to as Mset.
3. Construct a Boolean expression Query by selecting minterms from Mset. The goal is to select a small set of minterms that together select each document in the set Relevant. The constructed query is then written in a form suitable for the search engine.
4. If Query is acceptable to the search engine, stop: Query becomes the required synthesised query. Otherwise, Query needs modification in step 5.
5. Modify the minterms to create a new minterm set Mset and repeat from step 3.
In this research, Google is used as the prime example of a search engine because of its popularity and its cached-link feature. To confirm the success of the proposed query synthesis algorithm, a survey was organised with day-to-day users of a general-purpose search engine such as Google. A list of topics in diverse domains was chosen to collect data from the Web, and a set of queries was generated by applying the proposed algorithm to these data sets. The participants were then asked to create queries for the same topics consistent with the information need; no constraint was placed on them regarding time, number of tries or the quality of their queries. The target was to compare the quality of the human-generated queries with the synthesised queries using the evaluation measures precision, coverage and their combination, the F1 measure. The number of relevant documents among the first 10 and 20 retrieved links is used as the measure of precision; owing to the difficulty of calculating recall, a new measure called coverage is used in its place. The F1 measure serves as the main evaluation metric because it combines precision and coverage into a single value and favours a balanced performance of the two; it resolves anomalous situations in which a query with large coverage but low precision may be less satisfactory than one with modest coverage but high precision. The precision and coverage measurements collected during the survey show that the synthesised queries overwhelmingly outperform the user queries, and the synthesised queries achieve better F1 values than queries generated from a user's best effort, reflecting their higher precision and coverage. Besides achieving these goals, the proposed algorithm synthesises queries in a form and size acceptable to the search engine. To verify that the outcome of the survey did not arise by chance, a paired t-test was applied to the survey data; the results indicate that the advantage of the synthesised queries over the human-generated queries is statistically significant (p-value < 0.00001). The data obtained from the user survey has also been used to provide insights into the quality of human queries as a function of their syntactic and other characteristics.
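Step 3 above is essentially a set-cover problem. The following is a minimal Python sketch of that generate-and-select loop, assuming documents are modelled as sets of terms; the candidate-generation shortcut, the greedy strategy and all names (matches, candidate_minterms, synthesise_query, max_terms) are illustrative assumptions, not the thesis's actual procedure.
```python
# Toy sketch of steps 1-5: conjunctive "minterms" are generated from the
# user-marked documents, then greedily selected until every Relevant
# document is covered. All names here are invented for illustration.

def matches(minterm, doc):
    """A conjunctive minterm matches a document containing all its terms."""
    return minterm <= doc

def candidate_minterms(relevant, irrelevant):
    """Shortcut stand-in for the CNF-to-DNF construction: for each relevant
    document, keep the terms that never occur in any irrelevant document;
    any non-empty residue is a conjunction that rejects all of Irrelevant."""
    noise = set().union(*irrelevant) if irrelevant else set()
    cands = {frozenset(doc - noise) for doc in relevant}
    return {m for m in cands if m}

def synthesise_query(relevant, irrelevant, max_terms=32):
    """Greedy cover: repeatedly take the minterm matching the most still
    uncovered relevant documents. The max_terms cap plays the role of
    step 4's acceptability test for the target search engine."""
    uncovered = list(relevant)
    query, used_terms = [], 0
    cands = candidate_minterms(relevant, irrelevant)
    while uncovered and cands:
        best = max(cands, key=lambda m: sum(matches(m, d) for d in uncovered))
        gain = sum(matches(best, d) for d in uncovered)
        if gain == 0 or used_terms + len(best) > max_terms:
            break                       # query no longer acceptable: stop
        query.append(best)
        used_terms += len(best)
        uncovered = [d for d in uncovered if not matches(best, d)]
        cands.discard(best)
    # DNF result: an OR of AND-groups, in a Google-style surface syntax.
    return " OR ".join("(" + " ".join(sorted(m)) + ")" for m in query)

# Example: two relevant and one irrelevant document as term sets.
rel = [{"boolean", "query", "synthesis"}, {"relevance", "feedback", "query"}]
irr = [{"query", "shakespeare"}]
print(synthesise_query(rel, irr))
```
In the sketch the cap on query size stands in for the engine's acceptability test; when it is hit, a real implementation would modify the minterm set and retry, as in step 5.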
|
3 |
Integrity Analysis and Coercion in Distributed Systems
Hepburn, M Unknown Date (has links) (PDF)
This thesis presents a new approach to modelling the security and integrity of data in distributed and ad-hoc networks of processes. An annotated type-based analysis is introduced which ensures that no contamination will occur between data considered trustworthy and data that may have been corrupted. A method of performing safe run-time coercion of the security properties of data is also presented. This is novel because it enables users to perform run-time coercions of data in a manner that may be statically proven safe. Both plain networks and dynamic (agent-based) networks are considered; these are modelled as systems of first-order and higher-order pi-calculus, respectively. The higher-order system introduces a new notion of trustworthiness that depends on the context in which a process is typed or executed. This allows programs with malicious intent to be executed safely when it can be demonstrated that no interaction with other programs, including the host, is possible; a concept of execution context is introduced to perform this analysis. In addition, annotated type systems with and without sub-typing are described, and subject reduction is shown to hold for all systems considered. Implementation of the method is demonstrated via type-inference algorithms, which are shown to be both sound and complete for all systems.
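The contamination rule and the checked coercion at the heart of the analysis can be illustrated outside the pi-calculus. Below is a minimal Python sketch, assuming a run-time integrity flag where the thesis uses static annotated types; every name (Labelled, combine, endorse) is invented for the illustration.
```python
from dataclasses import dataclass

# Toy illustration of annotation-based integrity tracking with a checked
# run-time coercion. The thesis proves these properties statically in an
# annotated type system; this sketch only mirrors the idea dynamically.

@dataclass(frozen=True)
class Labelled:
    value: str
    trusted: bool          # integrity annotation carried with the data

def combine(a: Labelled, b: Labelled) -> Labelled:
    """Contamination rule: a result is trusted only if every input is."""
    return Labelled(a.value + b.value, a.trusted and b.trusted)

def endorse(x: Labelled, check) -> Labelled:
    """Safe coercion: untrusted data may be re-annotated as trusted only
    after an explicit validation check justifies the coercion."""
    if x.trusted or check(x.value):
        return Labelled(x.value, True)
    raise ValueError("coercion refused: data failed integrity check")

user_input = Labelled("42", trusted=False)       # e.g. from the network
config     = Labelled("timeout=", trusted=True)

merged = combine(config, user_input)
assert not merged.trusted                        # taint propagates

validated = endorse(user_input, str.isdigit)     # checked run-time coercion
assert validated.trusted
```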
|
4 |
A methodology for business processes identification: developing instruments for an effective enterprise system project
Berkowitz, Zeev January 2006 (has links)
Whole document restricted, see Access Instructions file below for details of how to access the print copy. / Since the mid 1990s, thousands of companies around the world have implemented Enterprise Systems (ES), which are considered to be the most important development in the corporate use of information technology. By providing computerized support to business processes spanning both the enterprise and the supply chain, these systems have become an indispensable tool used by organizations to achieve and maintain efficient and effective operational performance. However, there are many cases in which ES implementation has failed in terms of the required time and budget and, more importantly, in terms of functionality and performance. One of the main causes of these failures is the misidentification and improper selection of the business processes to be implemented in the ES, a crucial element of the system's implementation life cycle. To achieve effective implementation, a ‘necessary and sufficient’ set of business processes must be designed and implemented: implementing an excessive set of business processes is costly, yet implementing an insufficient set is ruinous. The heuristic identification of this set, based on requirements elicitation, is flawed; there is no guarantee that all the necessary processes have been captured (Type I error) or that no superfluous processes have been selected for implementation (Type II error). Existing implementation methods do not include a methodology that addresses this vital issue. This thesis aims to resolve the problem by providing a methodology that generates a necessary and sufficient set of business processes in a given organization, based on its specific characteristics, to be used as a baseline for implementing an ES. A proper definition of business processes and their associated properties is proposed and detailed. The properties are then used as parameters to generate the complete set of all possible business processes in the organization, from which the necessary and sufficient processes are selected. The methodology exposes the fundamental level of business processes, which is then used as a baseline for further phases of the implementation process. The proposed methodology has been tested through the analysis of companies that have implemented ES. In each case, identifying business processes with the proposed methodology proved to give better results than the practices actually used, producing a better approximation of the companies' existing business processes.
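The generate-then-select idea lends itself to a concrete sketch. The following minimal Python illustration assumes a few invented property dimensions standing in for the thesis's instruments: the Cartesian product of property values yields the complete candidate set, and an organisation-specific predicate selects a subset from it.
```python
from itertools import product

# Illustrative-only sketch: process properties act as parameters, their
# Cartesian product is the complete set of candidate processes, and an
# organisation-specific rule selects from it. The property names and the
# rule below are invented examples, not the thesis's actual instrument.

properties = {
    "object":    ["order", "invoice", "shipment"],
    "action":    ["create", "approve", "archive"],
    "initiator": ["customer", "clerk", "system"],
}

def candidate_processes(props):
    """Enumerate every combination of property values (the complete set)."""
    keys = list(props)
    for combo in product(*(props[k] for k in keys)):
        yield dict(zip(keys, combo))

def applies_to_org(proc):
    """Organisation-specific selection rule (a stand-in): here, customers
    may only initiate 'create' processes."""
    return proc["initiator"] != "customer" or proc["action"] == "create"

selected = [p for p in candidate_processes(properties) if applies_to_org(p)]
print(len(selected), "of", 3 * 3 * 3, "candidate processes selected")
```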
|
5 |
Cognitive Support during Object-Oriented Software Development: The Case of UML Diagrams
Costain, Gay January 2008 (has links)
The Object Management Group (OMG) accepted the Unified Modelling Language (UML) as a standard in 1997, yet there is sparse empirical evidence to justify its choice. This research aimed to address that lack by investigating the modification of programs for which external representations, drawn using the UML notations most commonly used in industry, were provided. The aim was to discover whether diagrams using those UML notations provided the modifying programmer with cognitive support. Modelling as an aid to program modification was chosen as the topic following interviews carried out in New Zealand and North America to discover whether workers in the software industry used modelling and, if so, whether UML notation satisfied their needs. The most preferred UML diagrams were identified from the interviews, and a framework of modelling use in software development was derived. A longitudinal study at a Seattle-based company suggested that program modification should be investigated. The methodology chosen for the research required subjects to modify two non-trivial programs, one of which was supplied with UML documentation. There were two aspects to the methodology. First, the subjects’ performances with and without the aid of UML documentation were compared: modifying a program is an exercise in problem solving, which is a cognitive activity, so if the use of UML improved subjects’ performances it could be said that the UML had aided their cognition. Second, concurrent verbal protocols were collected whilst the subjects modified the programs. The protocols for the modification with UML documentation, for ten of the more successful subjects, were transcribed and analysed according to a framework derived from the literature. The framework listed the possible cognitive steps involved in problem solving where cognition could be distributed to and from external representations; the categories of evidence that would confirm cognitive support were also derived from the literature. The experiments confirmed that programmers from similar backgrounds vary widely in ability and style. Twenty programmers modified both an invoice application and a diary application. There was some indication that the UML diagrams aided performance, and the analyses of all ten transcribed subjects showed evidence of UML cognitive support.
|
6 |
Multi-Vendor System Network Management: A Roadmap for Coexistence
Gutierrez, Jairo A. January 1997 (has links)
Whole document restricted, see Access Instructions file below for details of how to access the print copy. / As computer networks become more complex and more heterogeneous (often involving systems from multiple vendors), the importance of integrated network management increases. This thesis summarises research carried out 1) to identify the characteristics and requirements of an Integrated Network Management Environment (INME) and its individual components, 2) to propose a model to represent the INME, 3) to demonstrate the validity of the model, 4) to describe the steps needed to formally specify the model, and 5) to suggest an implementation plan for the INME. One of the key aspects of this thesis is the introduction of three different and complementary models used to integrate the emerging OSI management standards with the tried-and-proven network management solutions promoted by the Internet Activities Board. The Protocol-Oriented Network Management Model represents the existing network management supported by the INME, i.e. OSI- and Internet-based systems. The Element-Oriented Network Management Model represents the components used within individual network systems: it describes the managed objects and the platform application program interfaces (APIs), and it also includes the translation mechanisms needed to support interaction between OSI managers and Internet agents. The Interoperability Model represents the underlying communications infrastructure supporting network management: communication between agents and managers is represented with the required protocol stacks (OSI or TCP/IP) and by depicting the interconnection between the entities using the network management functions. This three-pronged classification provides a richer level of abstraction, facilitating the coexistence of the standard network management systems, allowing different levels of modelling complexity, and improving access to managed objects. The ultimate goal of this thesis is to describe a framework that assists developers of network management applications in integrating their solutions with an open-systems network management platform. This framework will also help network managers minimise the risks involved in the transition from first-generation network management systems to more integrated alternatives as they become available.
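The translation mechanism of the Element-Oriented model can be pictured as a proxy that maps OSI-style attribute names onto SNMP variables. Below is a minimal Python sketch; sysUpTime and ifOperStatus are genuine MIB-II objects, but the proxy class, its API and the mapping table are invented for the illustration.
```python
# Illustrative sketch of the Element-Oriented translation mechanism: an
# OSI-style manager asks for a managed-object attribute by name, and a
# proxy translates the request into the SNMP variable an Internet agent
# understands. The proxy API and mapping below are invented examples.

ATTRIBUTE_TO_OID = {
    # OSI GDMO-style attribute name -> SNMP MIB-II object identifier
    "systemUpTime":       "1.3.6.1.2.1.1.3.0",    # sysUpTime.0
    "interfaceOperState": "1.3.6.1.2.1.2.2.1.8",  # ifOperStatus
}

class ManagementProxy:
    """Stands between an OSI manager and an SNMP agent: translates
    attribute names to OIDs going down and raw values coming back up."""

    def __init__(self, snmp_get):
        # snmp_get: any callable taking an OID string and returning a
        # value; in a real system this would be an SNMP GET over UDP.
        self.snmp_get = snmp_get

    def get_attribute(self, name: str):
        oid = ATTRIBUTE_TO_OID.get(name)
        if oid is None:
            raise KeyError(f"no SNMP translation for attribute {name!r}")
        return self.snmp_get(oid)

# A fake agent standing in for the TCP/IP-side protocol stack.
fake_agent = {"1.3.6.1.2.1.1.3.0": 123456}.get
proxy = ManagementProxy(fake_agent)
print(proxy.get_attribute("systemUpTime"))   # -> 123456
```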
|