21

Dynamic update for operating systems

Baumann, Andrew, Computer Science & Engineering, Faculty of Engineering, UNSW January 2007 (has links)
Patches to modern operating systems, including bug fixes and security updates, and the reboots and downtime they require, cause tremendous problems for system users and administrators. The aim of this research is to develop a model for dynamic update of operating systems, allowing a system to be patched without the need for a reboot or other service interruption. In this work, a model for dynamic update based on operating system modularity is developed and evaluated using a prototype implementation for the K42 operating system. The prototype is able to update kernel code and data structures, even when the interfaces between kernel modules change. When applying an update, at no point is the system's entire execution blocked, and there is no additional overhead after an update has been applied. The base runtime overhead is also very low. An analysis of the K42 revision history shows that approximately 79% of past performance and bug-fix changes to K42 could be converted to dynamic updates, and the proportion would be even higher if the changes were being developed with dynamic update in mind. The model also extends to other systems, such as Linux and BSD, which, although modular in structure, are not strictly object-oriented like K42. The experience with this approach shows that dynamic update for operating systems is feasible given a sufficiently modular system structure, allows maintenance patches and updates to be applied without disruption, and need not constrain system performance.
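The abstract does not detail K42's update mechanism, but the core idea — reaching a module through an indirection layer and swapping its implementation only once the module is quiescent, transferring state to the new version — can be sketched as follows. The class and method names here are illustrative assumptions, not K42's actual interfaces.

```python
import threading

class Hotswap:
    """Minimal sketch of module indirection for dynamic update.

    Callers reach the module through this proxy; an update waits for
    quiescence (no calls in flight) before swapping the implementation
    and transferring state. Illustrative only, not K42's mechanism.
    """
    def __init__(self, impl):
        self._impl = impl
        self._active = 0                     # calls currently in flight
        self._lock = threading.Condition()

    def call(self, method, *args):
        with self._lock:
            self._active += 1
        try:
            return getattr(self._impl, method)(*args)
        finally:
            with self._lock:
                self._active -= 1
                self._lock.notify_all()

    def update(self, new_impl_cls, transfer):
        with self._lock:
            while self._active:              # wait for quiescence
                self._lock.wait()
            # build the new version from the old version's state
            self._impl = new_impl_cls(transfer(self._impl))

class CounterV1:
    def __init__(self, n=0): self.n = n
    def inc(self): self.n += 1; return self.n

class CounterV2:                             # patched module version
    def __init__(self, n): self.n = n
    def inc(self): self.n += 2; return self.n

proxy = Hotswap(CounterV1())
proxy.call("inc")                            # n == 1
proxy.update(CounterV2, lambda old: old.n)   # swap code, keep state
print(proxy.call("inc"))                     # 3
```

A real kernel would additionally need per-update state-transfer functions and per-thread quiescence detection; the `transfer` callback above only gestures at that step.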
22

Efficient Index Maintenance for Text Databases

Lester, Nicholas, nml@cs.rmit.edu.au January 2006 (has links)
All practical text search systems use inverted indexes to quickly resolve user queries. Offline index construction algorithms, where queries are not accepted during construction, have been the subject of much prior research. As a result, current techniques can invert virtually unlimited amounts of text in limited main memory, making efficient use of both time and disk space. However, these algorithms assume that the collection does not change during the use of the index. This thesis examines the task of index maintenance, the problem of adapting an inverted index to reflect changes in the collection it describes. Existing approaches to index maintenance are discussed, including proposed optimisations. We present analysis and empirical evidence suggesting that existing maintenance algorithms either scale poorly to large collections or significantly degrade query resolution speed. In addition, we propose a new strategy for index maintenance that trades a strictly controlled amount of querying efficiency for greatly increased maintenance speed and scalability. Analysis and empirical results show that this new algorithm is a useful trade-off between indexing and querying efficiency. In scenarios described in Chapter 7, the use of the new maintenance algorithm reduces the time required to construct an index to under one sixth of the time taken by algorithms that maintain contiguous inverted lists. In addition to work on index maintenance, we present a new technique for accumulator pruning during ranked query evaluation, and provide evidence that existing approaches are unsatisfactory for large collections. Accumulator pruning is a key problem for both querying efficiency and overall text search system efficiency. Existing approaches either fail to bound the memory footprint required for query evaluation or suffer a loss of retrieval accuracy.
In contrast, the new pruning algorithm can be used to limit the memory footprint of ranked query evaluation, and in our experiments gives retrieval accuracy no worse than previous alternatives. The results presented in this thesis are validated with robust experiments that utilise large collections of real data and appropriate numbers of real queries. The techniques presented in this thesis allow information retrieval applications to efficiently index and search changing collections, a task that has historically been problematic.
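The thesis's actual maintenance algorithm is not reproduced in the abstract; the sketch below only illustrates the general trade-off it describes — keeping several sealed index partitions so that maintenance avoids rewriting contiguous inverted lists, at the price of consulting every partition at query time. All names and the partition-sealing policy are assumptions for illustration.

```python
from collections import defaultdict

class PartitionedIndex:
    """Sketch: several small inverted-index partitions instead of one
    contiguous index. New documents go into an in-memory partition
    that is sealed when full; queries consult every partition. This
    trades some query cost for much cheaper maintenance."""
    def __init__(self, docs_per_partition=2):
        self.cap = docs_per_partition
        self.current = defaultdict(set)   # term -> doc ids being built
        self.n_docs = 0
        self.sealed = []                  # immutable sealed partitions

    def add(self, doc_id, text):
        for term in text.lower().split():
            self.current[term].add(doc_id)
        self.n_docs += 1
        if self.n_docs >= self.cap:       # seal (a disk flush in practice)
            self.sealed.append(dict(self.current))
            self.current = defaultdict(set)
            self.n_docs = 0

    def lookup(self, term):
        hits = set(self.current.get(term, ()))
        for part in self.sealed:          # query cost grows with partitions
            hits |= part.get(term, set())
        return sorted(hits)

idx = PartitionedIndex()
idx.add(1, "inverted index maintenance")
idx.add(2, "index construction")
idx.add(3, "query evaluation over an index")
print(idx.lookup("index"))   # [1, 2, 3]
```

Merging sealed partitions back together periodically (as merge-based maintenance schemes do) would bound the number of partitions a query must visit.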
23

Kerr v. Danier Leather: an Analysis of the Difficulty to Enforce a Duty to Update Statements about the Future in the Context of Securities Regulation

Trindade Pereira, Diego 11 January 2011 (has links)
Forecasts, predictions and opinions about the future should not be treated in the same way as hard information is treated under the Securities Act. Because this type of soft information cannot be verified in advance, the imposition of liability in respect of these statements about the future may hinder their production and have a result that is adverse to the interests of investors – who would prefer to hear management speak candidly about its thoughts on the company’s future performance. This essay examines the way in which the Ontario Securities Act treats statements about the future, as well as the most important decision in this area up to the present: Kerr v. Danier Leather. It will also discuss whether there should be a duty to update predictions when the circumstances that formed the basis of these forecasts have changed significantly.
25

A Recursive Relative Prefix Sum Approach to Range Queries in Data Warehouses

Wu, Fa-Jung 07 July 2002 (has links)
Data warehouses contain data consolidated from several operational databases and provide historical, summarized data that is more appropriate for analysis than detailed individual records. On-Line Analytical Processing (OLAP) provides advanced analysis tools to extract information from data stored in a data warehouse. OLAP is designed to provide aggregate information that can be used to analyze the contents of databases and data warehouses. A range query applies an aggregation operation over all selected cells of an OLAP data cube, where the selection is specified by providing ranges of values for numeric dimensions. Range sum queries are very useful in finding trends and in discovering relationships between attributes in the database. One method, the prefix sum method, guarantees that any range sum query on a data cube can be answered in constant time by precomputing some auxiliary information; however, it is hampered by its update cost. Today's interactive data analysis applications, which provide current or "near current" information, require fast response times and reasonable update times. Since the size of a data cube is exponential in the number of its dimensions, rebuilding the entire data cube can be very costly and is not realistic. To cope with this dynamic data cube problem, several strategies have been proposed. They all use specific data structures, which incur extra storage cost, to answer range sum queries quickly. For example, the double relative prefix sum method makes use of three components to store auxiliary information: a block prefix array, a relative overlay array and a relative prefix array. Although the double relative prefix sum method improves the update cost, it increases the query time. In this thesis, we present a method, called the recursive relative prefix sum method, which provides a compromise between query and update cost.
In the recursive relative prefix sum method with k levels, we use a relative prefix array and k relative overlay arrays. Our performance study shows that the update cost of our method is always less than that of the prefix sum method. In most cases, the update cost of our method is less than that of the relative prefix sum method. Moreover, in most cases, the query cost of our method is less than that of the double relative prefix sum method. Compared with the dynamic data cube method, our method has lower storage cost and shorter query time. Consequently, our recursive relative prefix sum method has a reasonable response time for ad hoc range queries on the data cube, while at the same time greatly reducing the update cost. In some applications, however, updates in some regions may happen more frequently than in others. We also provide a solution, called the weighted relative prefix sum method, for this situation. This method can likewise provide a compromise between range sum query cost and update cost when the update probabilities of different regions are considered.
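For context, the baseline prefix sum method that these variants improve upon can be shown in one dimension: range sums are answered in O(1), but a single cell update forces a rewrite of every later prefix entry — exactly the cost the relative and recursive variants bound to a block. This sketch is illustrative only and does not implement the thesis's recursive structure.

```python
class PrefixSumCube:
    """1-D illustration of the prefix sum method: p[i] holds the sum
    of the first i cells, so any range sum is one subtraction, while
    an update must touch every suffix entry."""
    def __init__(self, cells):
        self.p = [0]
        for v in cells:
            self.p.append(self.p[-1] + v)

    def range_sum(self, lo, hi):
        """Sum of cells[lo..hi] inclusive, in O(1)."""
        return self.p[hi + 1] - self.p[lo]

    def update(self, i, delta):
        """Add delta to cell i: O(n), since every later prefix changes."""
        for j in range(i + 1, len(self.p)):
            self.p[j] += delta

cube = PrefixSumCube([3, 1, 4, 1, 5, 9])
print(cube.range_sum(1, 3))   # 1 + 4 + 1 = 6
cube.update(2, 10)            # cell 2 becomes 14
print(cube.range_sum(1, 3))   # 16
```

The relative and recursive variants described above limit that O(n) update walk to a single block plus a small overlay, which is the query/update compromise the thesis quantifies.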
26

Application of Template Update to Visual Servo for a Deformable Object

Chou, Cheng-te 04 August 2008 (has links)
A monocular visual servo system for a target with variable shape has been developed in this paper. It consists of two parts: an image-processing unit and a servo control unit. In the image-processing unit, the motion between the target and the image center is determined by a template-matching approach. The image is grabbed by a camera mounted on a pan-tilt robot, and the robot is controlled to track the target by keeping it at the image center. However, the template needs to be updated when the target deforms. In the servo control unit, the movement is estimated with the Kalman filter technique to enhance the tracking performance of the visual servo system.
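The paper's matching and update details are not given in the abstract; a minimal one-dimensional sketch of template matching with a score-triggered template update might look like the following, where the sum-of-squared-differences score, the threshold, and the update rule are all illustrative assumptions.

```python
def match(template, frame):
    """Slide the template over the frame; return (best offset, score)
    using sum of squared differences (lower is better). A 1-D
    stand-in for 2-D image template matching."""
    best = (None, float("inf"))
    for off in range(len(frame) - len(template) + 1):
        ssd = sum((t - f) ** 2 for t, f in zip(template, frame[off:]))
        if ssd < best[1]:
            best = (off, ssd)
    return best

def track(template, frames, update_threshold=2.0):
    """Track the target; when the match score degrades past the
    threshold (target deforming), refresh the template from the
    current frame. Threshold and rule are illustrative."""
    positions = []
    for frame in frames:
        off, score = match(template, frame)
        positions.append(off)
        if score > update_threshold:                   # deformation
            template = frame[off:off + len(template)]  # template update
    return positions

frames = [
    [0, 0, 5, 6, 5, 0],   # target at offset 2
    [0, 0, 0, 5, 6, 5],   # target moved to offset 3
    [0, 0, 0, 6, 7, 6],   # target deformed slightly
]
print(track([5, 6, 5], frames))   # [2, 3, 3]
```

In the full system the measured offsets would also feed a Kalman filter, so the pan-tilt controller tracks a smoothed motion estimate rather than raw match positions.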
27

Cooperative Location Update in Wireless Mobile Networks

Ye, Cai-Fang 06 August 2008 (has links)
In this paper, in order to reduce the location update cost in wireless mobile networks, we propose a cooperative location update scheme. The proposed scheme first discovers the statistical relation between mobile stations according to their history of location updates and paging. To reduce the total cost of mobility management, we propose integrating the cooperative location update scheme with the concurrent search scheme. We use analytical and simulation results to justify the proposed approach.
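The statistical machinery of the scheme is not spelled out in the abstract; as a sketch under simple assumptions, the co-location frequency of two stations' update histories could be estimated and used to bias the paging order, in the spirit of combining cooperative updates with concurrent search. The functions and the threshold below are illustrative, not the paper's model.

```python
def co_location_prob(history_a, history_b):
    """Estimate how often two stations were in the same cell, from
    aligned location histories (one cell id per epoch). A stand-in
    for the paper's statistical relation discovery."""
    same = sum(a == b for a, b in zip(history_a, history_b))
    return same / len(history_a)

def paging_order(cells, peer_cell, correlated):
    """If the target is strongly correlated with a peer whose cell
    is fresh, page that cell first before sweeping the rest."""
    if correlated:
        return [peer_cell] + [c for c in cells if c != peer_cell]
    return list(cells)

hist_a = [1, 1, 2, 3, 3, 4]      # station A's cells over six epochs
hist_b = [1, 1, 2, 3, 4, 4]      # station B usually moves with A
p = co_location_prob(hist_a, hist_b)
print(paging_order([1, 2, 3, 4], 4, p > 0.6))   # [4, 1, 2, 3]
```

The saving comes from B skipping its own location update when A has just reported from a cell the pair usually shares, at the cost of occasionally paging one extra cell.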
28

Using a Rule-System as Mediator for Heterogeneous Databases, exemplified in a Bioinformatics Use Case

Schroiff, Anna January 2005 (has links)
Databases nowadays used in all kinds of application areas often differ greatly in a number of properties. These varieties add complexity to the handling of databases, especially when two or more different databases are dependent.
The approach described here to propagate updates in an application scenario with heterogeneous, dependent databases is the use of a rule-based mediator. The system EruS (ECA rules updating SCOP) applies active database technologies in a bioinformatics scenario. Reactive behaviour based on rules is used for databases holding protein structures.
The inherent heterogeneities of the Structural Classification of Proteins (SCOP) database and the Protein Data Bank (PDB) cause inconsistencies in the SCOP data derived from PDB. This complicates research on protein structures.
EruS solves this problem by establishing rule-based interaction between the two databases. The system is built on the rule engine ruleCore with Event-Condition-Action rules to process PDB updates. It is complemented with wrappers accessing the databases to generate the events, which are executed as actions. The resulting system processes deletes and modifications of existing PDB entries and updates SCOP flatfiles with the relevant information. This is the first step in the development of EruS, which is to be extended in future work.
The project improves bioinformatics research by providing easy access to up-to-date information from PDB to SCOP users. The system can also be considered as a model for rule-based mediators in other application areas.
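ruleCore's actual rule format is not shown in the abstract, but the Event-Condition-Action pattern EruS builds on can be sketched with a toy mediator, where a PDB-delete event triggers a rule that patches an in-memory stand-in for SCOP data. All identifiers below are hypothetical.

```python
class EcaMediator:
    """Toy Event-Condition-Action mediator in the spirit of EruS:
    events from a PDB-side wrapper trigger rules whose actions
    patch a SCOP-side store. Illustrative, not ruleCore's API."""
    def __init__(self):
        self.rules = []   # (event type, condition, action) triples

    def on(self, event_type, condition, action):
        self.rules.append((event_type, condition, action))

    def emit(self, event):
        for etype, cond, act in self.rules:
            if event["type"] == etype and cond(event):
                act(event)

# Hypothetical SCOP-side data keyed by PDB id.
scop = {"1abc": {"fold": "globin-like"}, "2xyz": {"fold": "beta-barrel"}}

m = EcaMediator()
# Rule: when PDB reports an entry as deleted, drop it from SCOP data.
m.on("pdb_delete",
     lambda e: e["pdb_id"] in scop,
     lambda e: scop.pop(e["pdb_id"]))

m.emit({"type": "pdb_delete", "pdb_id": "2xyz"})
print(sorted(scop))   # ['1abc']
```

In EruS the action side would rewrite SCOP flatfiles rather than a dictionary, and a modification event would carry the changed fields instead of just an id.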
29

Mathcad „The Next Generation“

Jordan, Dirk 25 May 2010 (has links) (PDF)
This talk presents the continued development of Mathcad: first the update to Mathcad 15, and then the entirely new generation, Mathcad Prime, with a new user interface and a new "look and feel", as well as the new functions and capabilities in Mathcad Prime.
30

Updating Library, Architectural Adaptation in Response to the Virtual Space of the Internet

Mallysh, Phillip Wilson 18 November 2013 (has links)
The explosion of new technologies, predominantly the increased inhabitation of the virtual space of the Internet, implies an emergent organization that challenges the existing structures of our established institutions. To understand how this shift affects architecture, a physical construct, an existing university library (Killam Library at Dalhousie University in Halifax, Canada) is examined through its conception, implementation and subsequent use, in order to be updated to the present condition. With the shift in format from the book to the digital realm, the false perception of the Internet as a cloud reflects society's willingness to negate the physicality of information and transfer power towards large private corporations. Using architectural adaptation, the physicality of information can be reinstated by representing equally, and intensifying, moments of stasis and movement. The arising situation and the misconceptions that follow call for the re-examination and updating of the library typology, to offer new spatial arrangements back to the public that are representative of the contemporary condition.
