501

Data management in forecasting systems : optimization and maintenance / Gestion des données dans les systèmes prévisionnels : optimisation et maintenance

Feng, Haitang 17 October 2012 (has links)
Forecasting systems are usually based on data warehouses for data storage and on OLAP tools for historical and predictive data visualization. Aggregated predictive data may be modified interactively. Hence, the research issue can be described as the propagation of an aggregate-based modification through the hierarchies and dimensions of a data warehouse environment. There exists a great number of research works on related view maintenance problems. However, to our knowledge, the impact of interactive aggregate modifications on raw data has not been investigated. This CIFRE thesis is supported by ANRT and the company Anticipeo. The Anticipeo application is a sales forecasting system that predicts future sales in order to draw up appropriate business strategies in advance. At the beginning of the thesis, the customers of Anticipeo were satisfied with the precision of the prediction results, but not with the response time. The work of this thesis falls into two parts. The first part consists of an audit of the existing application to identify the source of the latency: we proposed a methodology relying on different approaches and technical solutions to improve the performance of an application. However, the propagation of a modification made on an aggregate in a data warehouse could not be resolved by these technical means. The second part of our work is the proposition of a new algorithm (PAM - Propagation of Aggregate-based Modification), with an extended version (PAM II), to efficiently propagate an aggregate-based modification. The algorithms identify and update the exact sets of source data and other aggregates impacted by the aggregate modification. The optimized PAM II version achieves better performance than PAM when additional semantics (e.g. dependencies) can be used. Experiments on real data from Anticipeo showed that the PAM algorithm and its extension bring better performance when propagating updates.
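A minimal sketch of the propagation problem the abstract describes, assuming a toy table of sales rows; the names and the proportional scaling policy are illustrative, not PAM itself:

```python
# Hedged sketch: push a modification made on an aggregate down to the source
# rows that feed it, proportionally to their current contribution, then
# refresh aggregates over the same rows. PAM's contribution is identifying
# the exact impacted sets inside the warehouse; this only shows the idea.
def propagate_aggregate_change(rows, key, group, new_total):
    members = [r for r in rows if r["group"] == group]
    old_total = sum(r[key] for r in members)
    if old_total == 0:
        raise ValueError("cannot scale an all-zero group")
    scale = new_total / old_total
    for r in members:                 # push the change down to the raw data
        r[key] *= scale
    # every aggregate computed over these rows must be refreshed
    return {g: sum(r[key] for r in rows if r["group"] == g)
            for g in {r["group"] for r in rows}}

sales = [{"group": "north", "amount": 60.0},
         {"group": "north", "amount": 40.0},
         {"group": "south", "amount": 50.0}]
totals = propagate_aggregate_change(sales, "amount", "north", 120.0)
# north becomes 120.0 (its rows scaled to 72.0 and 48.0); south stays 50.0
```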
502

Decision Support System for the Evaluation and Comparison of Concession Project Investments

McCowan, Alison Kate January 2004 (has links)
Governments of developed and developing countries alike are unable to fund the construction and maintenance of vital physical infrastructure such as roads, railways, water and wastewater treatment plants, and power plants. Thus, they are increasingly turning to the private sector as a source of finance through procurement methods such as concession contracts. The most common form of concession contract is the Build-Operate-Transfer (BOT) contract, where a government (Principal) grants a private sector company (Promoter) a concession to build, finance, operate and maintain a facility and collect revenue over the concession period before finally transferring the facility, at no cost to the Principal, as a fully operational facility. Theoretically speaking, these projects present a win-win-win solution for the community as well as both private and public sector participants. However, with the opportunity for private sector companies to earn higher returns comes greater risk, and in practice the promised win-win-win outcome has failed to materialise in a number of countries, including Australia. Private sector participants have admitted that there are problems that must be addressed to improve the process. Indeed, they have attributed the underperformance of concession projects to the inability of both project Principals and Promoters to predict the impact of all financial and non-financial (risk) factors associated with concession project investments (CPIs) and to negotiate contracts that allow for these factors. Non-financial project aspects, such as social, environmental, political, legal and market share factors, are deemed to be important, but these aspects would usually be considered to lie outside the normal appraisal process. To allow for the effects of such qualitative aspects, the majority of Principal or promoting organisations resort to estimating money contingencies without an appropriate quantification of the combined effects of financial and non-financial (risk and opportunity) factors. In extreme cases, neglect of non-financial aspects can cause the failure of a project despite very favourable financial components, or can prevent the go-ahead of a project of great non-financial benefit because its projected financial returns are merely ordinary. Hence, non-financial aspects need careful analysis and understanding so that they can be assessed and properly managed. It is imperative that feasibility studies allow the promoting organisation to include a combination of financial factors and non-financial factors related to the economic environment, project complexity, innovation, market share, competition, and the national significance of the project investment. While much research has already focused on the classification of CPI non-financial (risk) factors and the identification of interdependencies between risk factors on international projects, no attempt has yet been made to quantify these risk interdependencies. Building upon the literature, this thesis proposes a generic CPI risk factor framework (RFF) including important interdependencies, which were verified and quantified using input provided by practitioners and researchers conversant with the risk profiles of international and/or concession construction projects.
Decision Support Systems (DSSs) are systems designed to assist in the decision making process by providing all necessary information to the analyst. There are a number of DSSs that have been developed over recent years for the evaluation of high-risk construction project investments, such as CPIs, which incorporate the analysis of both financial and non-financial (risk) aspects of the investment. However, although these DSSs have been useful to practitioners and researchers alike, they have not offered a satisfactory solution to the modelling problem and are all limited in their practical application for various reasons. Thus, the construction industry lacks a DSS that is capable of evaluating and comparing several CPI options, taking into consideration both financial and non-financial aspects of an investment, as well as including the uncertainties commonly encountered at the feasibility stage of a project, in an efficient and effective manner. These two criteria, efficiency and effectiveness, are integral to the usefulness and overall acceptance of the developed DSS in industry. This thesis develops an effective and efficient DSS to evaluate and compare CPI opportunities at the feasibility stage. The novel DSS design is based upon a combination of: (1) the mathematical modelling technique and financial analysis model that captures the true degree of certainty surrounding the project; and (2) the decision making technique and RFF that most closely reproduces the complexity of CPI decisions. Overall, this thesis outlines the methodology followed in the development of the DSS – produced as a stand-alone software product – and demonstrates its capabilities through a verification and validation process using real-life CPI case studies.
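To make the evaluation idea concrete, a minimal sketch that combines a financial measure with weighted non-financial risk scores; the factor names, weights and figures are invented, and the scheme is far cruder than the thesis's RFF and compromise DSP formulation:

```python
# Hedged sketch: rank competing concession project investments (CPIs) by
# combining a financial measure (NPV) with weighted non-financial risk
# factor scores. All names, weights and cash flows are illustrative.
def npv(cash_flows, rate):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def score_cpi(cash_flows, rate, risk_scores, risk_weights, npv_weight=0.5):
    financial = npv(cash_flows, rate)
    non_financial = sum(risk_weights[f] * risk_scores[f] for f in risk_weights)
    return npv_weight * financial + (1 - npv_weight) * non_financial

weights = {"political": 30.0, "environmental": 20.0, "market_share": 50.0}
option_a = score_cpi([-100, 30, 40, 60], 0.08,
                     {"political": 0.6, "environmental": 0.8, "market_share": 0.5},
                     weights)
option_b = score_cpi([-80, 25, 30, 45], 0.08,
                     {"political": 0.9, "environmental": 0.7, "market_share": 0.4},
                     weights)
print("prefer A" if option_a > option_b else "prefer B")
```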
503

Formal design of data warehouse and OLAP systems : a dissertation presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Information Systems at Massey University, Palmerston North, New Zealand

Zhao, Jane Qiong January 2007 (has links)
A data warehouse is a single data store in which data from multiple data sources is integrated for online analytical processing (OLAP) across an entire organisation. The rationale for being single and integrated is to ensure a consistent view of organisational business performance, independent of the different angles of business perspectives. Due to its wide coverage of subjects, data warehouse design is a highly complex, lengthy and error-prone process. Furthermore, the business analytical tasks change over time, which results in changes in the requirements for the OLAP systems. Thus, data warehouse and OLAP systems are rather dynamic, and the design process is continuous. In this thesis, we propose a method that is integrated, formal and application-tailored, in order to overcome the complexity problem, deal with the system dynamics, and improve the quality of the system and the chance of success. Our method comprises three important parts: the general ASM method with types, the application-tailored design framework for data warehouse and OLAP, and the schema integration method with a set of provably correct refinement rules. By using the ASM method, we are able to model both data and operations in a uniform conceptual framework, which enables us to design an integrated approach for data warehouse and OLAP design. The freedom given by the ASM method allows us to model the system at an abstract level that is easy to understand for both users and designers. More specifically, the language allows us to use terms from the user domain that are not biased by the terms used in computer systems. The pseudo-code-like transition rules, which give the simplest form of operational semantics in ASMs, are close enough to programming languages for designers to understand. Furthermore, these rules are rooted in mathematics, which helps improve the quality of the system design. By extending the ASMs with types, the modelling language is tailored for data warehousing, using terms that are well developed for data-intensive applications; this makes it easy to model schema evolution as refinements in dynamic data warehouse design. By providing the application-tailored design framework, we break down the design complexity by business processes (also called subjects in data warehousing) and design concerns. By designing the data warehouse by subjects, our method resembles Kimball's "bottom-up" approach; however, with the schema integration method, our method resolves the stovepipe issue of that approach. By building up a data warehouse iteratively in an integrated framework, our method not only results in an integrated data warehouse, but also resolves the issues of complexity and delayed ROI (Return On Investment) in Inmon's "top-down" approach. By dealing with user change requests in the same way as new subjects, and by modelling data and operations explicitly in a three-tier architecture, namely the data sources, the data warehouse and the OLAP (Online Analytical Processing) systems, our method facilitates dynamic design with system integrity. By introducing a notion of refinement specific to schema evolution, namely schema refinement, which captures the notion of schema dominance in schema integration, we are able to build a set of correctness-proven refinement rules. By providing this set of refinement rules, we simplify the designers' work of verifying design correctness. Nevertheless, we do not aim for a complete set, because there are many different ways to perform schema integration; nor do we prescribe a single way of integration, so as to allow designer-favoured designs. Furthermore, given its flexibility in the process, our method can easily be extended to cover newly emerging design issues.
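A minimal sketch of what a pseudo-code-like ASM transition rule and its simultaneous-update semantics look like, assuming illustrative names; the thesis's typed ASM language is far richer:

```python
# Hedged sketch of an abstract state machine (ASM) step: rules inspect the
# current state and yield (location, value) updates; all updates in a step
# fire simultaneously, and an inconsistent update set is an error.
def asm_step(state, rules):
    updates = {}
    for rule in rules:
        for loc, val in rule(state):
            if loc in updates and updates[loc] != val:
                raise ValueError(f"inconsistent update set at {loc}")
            updates[loc] = val
    new_state = dict(state)
    new_state.update(updates)
    return new_state

# Illustrative rule in the spirit of the thesis's transition rules
# (names invented, not taken from the text):
def refresh_rule(state):
    if state["source_changed"]:
        yield ("warehouse_version", state["warehouse_version"] + 1)
        yield ("source_changed", False)

state = {"source_changed": True, "warehouse_version": 1}
print(asm_step(state, [refresh_rule]))
# {'source_changed': False, 'warehouse_version': 2}
```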
504

"The Machine Made Me Do It!" : An Exploration of Ascribing Agency and Responsibility to Decision Support Systems

Haviland, Hannah January 2005 (has links)
Are agency and responsibility solely ascribable to humans? The advent of artificial intelligence (AI), including the development of so-called “affective computing,” appears to be chipping away at the traditional building blocks of moral agency and responsibility. Spurred by the realization that fully autonomous, self-aware, even rational and emotionally-intelligent computer systems may emerge in the future, professionals in engineering and computer science have historically been the most vocal to warn of the ways in which such systems may alter our understanding of computer ethics. Despite the increasing attention of many philosophers and ethicists to the development of AI, there continues to exist a fair amount of conceptual muddiness on the conditions for assigning agency and responsibility to such systems, from both an ethical and a legal perspective. Moral and legal philosophies may overlap to a high degree, but are neither interchangeable nor identical. This paper attempts to clarify the actual and hypothetical ethical and legal situations governing a very particular type of advanced, or “intelligent,” computer system: medical decision support systems (MDSS) that feature AI in their system design. While it is well-recognized that MDSS can be categorized by type and function, further categorization of their mediating effects on users and patients is needed in order to even begin ascribing some level of moral or legal responsibility. I conclude that various doctrines of Anglo legal systems appear to allow for the possibility of assigning specific types of agency – and thus specific types of legal responsibility – to some types of MDSS. Strong arguments for assigning moral agency and responsibility are still lacking, however.
505

Knowledge Discovery in a Review of Monograph Acquisitions at an Academic Health Sciences Library

M Rodriguez 07 April 2008 (has links)
This study evaluates monograph acquisition decisions at an academic health sciences library using circulation and acquisitions data. The goal was to provide insight into how to allocate library funds to support research and education in disciplines of interest to the library's user base. Data analysis revealed that allocations in 13 subject areas should be reviewed, as their cost of circulation was greater than the sample average and their average monograph cost was also higher than the sample average. In contrast, 13 subjects returned cost-of-circulation rates lower than the sample average; these subjects merit stable or increased budget allocation, depending upon collection needs. Overall, this study found that this library is allocating the majority of its resources to subjects with above-average rates of use.
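The core metric is straightforward; a sketch of the flagging logic follows, with invented figures standing in for the library's acquisitions and circulation data:

```python
# Hedged sketch of the study's core comparison: compute cost per circulation
# by subject and flag subjects whose rate exceeds the sample average.
# Figures are invented; the study used real acquisitions and circulation data.
subjects = {
    "cardiology":   {"spend": 12000.0, "circulations": 300},
    "biochemistry": {"spend": 9000.0,  "circulations": 450},
    "nursing":      {"spend": 4000.0,  "circulations": 400},
}
rates = {s: d["spend"] / d["circulations"] for s, d in subjects.items()}
average = sum(rates.values()) / len(rates)
review = [s for s, r in rates.items() if r > average]
print(f"sample average cost per circulation: {average:.2f}")
print("allocations to review:", review)   # subjects costing more than average
```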
507

On the utilisation of information technology for the management of profitable maintenance

Kans, Mirka January 2008 (has links)
Maintenance is one area of business that has recently come to be seen as contributing effectively to a company's strategic goals. Understanding how maintenance could utilise modern technology, such as computerised tools or digital data processing, is one way to make maintenance profitable. Current research lacks appropriate methods for determining data and IT requirements, as well as an understanding of how IT could be utilised to enhance maintenance profitability. This thesis studies and develops tools, methods and theories concerning how information technology can be utilised for maintenance management in order to reach profitable maintenance. The main research problem is therefore: what are the demands on information technology systems to achieve profitable maintenance? The main problem has been divided into three research questions: RQ1) What are the demands on data and information technology systems for strategic management of maintenance? RQ2) How can we identify the relevant data and information technology systems required to achieve cost-effective maintenance decisions? RQ3) How can we describe the utilisation of information technology within maintenance? This thesis is based on a systems theory approach, where maintenance is not seen as an isolated activity, but as something that interacts with, affects and is affected by several other activities, such as production, logistics and quality. Several methods have been used to answer the research questions, but the theory-testing case study method dominates. The main results achieved in the thesis are models and theory for 1) creating a relevant set of data for cost-effective maintenance decisions, 2) monitoring the performance of maintenance, suggesting investment possibilities and showing the contribution of maintenance at the company's strategic level, 3) identifying the IT system requirements demanded to achieve profitable maintenance, and 4) assessing the IT maturity of a maintenance organisation for the effective utilisation of IT systems. As an illustrative example of the demands of computerised maintenance management, a conceptual decision support model has been developed, which aims at filling the gaps of poor IT coverage for strategic maintenance decision-making. This thesis concludes that the demands on data and IT applications must be connected to the overall maintenance demands, which are reflected in the maintenance goals, purposes and strategy, in order to achieve profitable maintenance. Furthermore, structured methods that ensure the connection between maintenance business goals and data or IT demands are important. The ability to make use of IT within maintenance is reflected in the relative IT maturity of the maintenance organisation. Being able to define the IT maturity allows for choosing the most appropriate IT tool to invest in, so that current and future needs for IT support are covered with maximum benefit and minimum cost.
508

Modeling as a Tool to Support Self-Management of Type 1 Diabetes

Bergenholm, Linnéa January 2013 (has links)
Type 1 diabetes (T1D) is an auto-immune disease characterized by insulin deficiency. Insulin is a metabolic hormone involved in lowering blood glucose (BG) levels in order to keep BG within a tight range. In T1D this glycemic control is lost, causing chronic hyperglycemia (excess glucose in the blood stream). Chronic hyperglycemia damages vital tissues, so glycemic control must be restored. A common therapy for restoring glycemic control is intensive insulin therapy, where the missing insulin is replaced with regular insulin injections. When dosing this compensatory insulin, many factors that affect glucose metabolism must be considered. Linkura is a company that has developed tools for monitoring the most important of these factors, namely meals and exercise. In the Linkura meal and exercise tools, the nutrition content of meals and the calorie consumption during exercise are estimated. Another tool designed to aid control of BG is the bolus calculator. Bolus calculators use input of BG level, carbohydrate intake, and insulin history to estimate insulin need. The accuracy of these insulin bolus calculations suffers from two problems. First, errors occur when users inaccurately estimate the carbohydrate content of meals. Second, exercise is not included in bolus calculations. To reduce these problems, it was suggested that the Linkura web tools could be used in combination with a bolus calculator. For this purpose, a bolus calculator was developed. The bolus calculator was based on existing models that use clinical parameters to relate changes in BG levels to meal, insulin, and exercise stimulations. The bolus calculator was evaluated using data collected from Linkura's web tools. The collected data showed some inconsistencies which cannot be explained by any model. The performance of the bolus calculator in predicting BG levels using general equations to derive the clinical parameters was inadequate. Performance was increased by adopting an update algorithm in which the clinical parameters were updated daily using previous data. Still, better model performance is preferred for use in a bolus calculator. The results show potential in developing bolus calculator tools combined with the Linkura tools. For such a bolus calculator, further evaluation of modeling long-term exercise, and additional safety features minimizing the risk of hypoglycemia, are required.
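The arithmetic underlying a standard bolus calculator, with an exercise reduction factor in the spirit of the proposed tool; the parameter values are illustrative assumptions, not clinical guidance:

```python
# Hedged sketch of a standard insulin bolus calculation: carbohydrate dose
# plus correction dose, reduced for insulin still on board (IOB) and for
# planned exercise. Values are illustrative only, not clinical advice.
def bolus(carbs_g, bg, bg_target, icr, isf, iob=0.0, exercise_factor=1.0):
    carb_dose = carbs_g / icr                       # icr: grams covered per unit
    correction = max(0.0, (bg - bg_target) / isf)   # isf: mmol/L drop per unit
    return max(0.0, exercise_factor * (carb_dose + correction) - iob)

# 60 g meal, BG 9 mmol/L vs target 6, ICR 10 g/U, ISF 2 mmol/L per U,
# 1 U on board, dose reduced 25% for planned exercise:
print(round(bolus(60, 9.0, 6.0, icr=10, isf=2.0, iob=1.0,
                  exercise_factor=0.75), 2))
# prints 4.62 (0.75 * (6.0 + 1.5) - 1.0 = 4.625 U)
```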
509

Using logic-based approaches to explore system architectures for systems engineering

Kerzhner, Aleksandr A. 21 May 2012 (has links)
This research is focused on helping engineers design better systems by supporting their decision making. When engineers design a system, they have an almost unlimited number of possible system alternatives to consider. Modern systems are difficult to design because of the need to satisfy many different stakeholder concerns from a number of domains, which requires a large amount of expert knowledge. Current systems engineering practices try to simplify the design process by providing practical approaches to managing the large amount of knowledge and information needed during the process. Although these methods make designing a system more practical, they do not support a structured decision making process, especially at early stages when designers are selecting the appropriate system architecture; instead they rely on designers using ad hoc frameworks that are often self-contradictory. In this dissertation, a framework for performing architecture exploration at early stages of the design process is presented. The goal is to support more rational and self-consistent decision making by allowing designers to explicitly represent their architecture exploration problem and then use computational tools to perform this exploration. To represent the architecture exploration problem, a modeling language is presented which explicitly models the problem as an architecture selection decision. This language is based on the principles of decision-based design and decision theory, where decisions are made by picking the alternative that results in the most preferred expected outcome. The language is designed to capture potential alternatives in a compact form, analysis knowledge used to predict the quality of a particular alternative, and evaluation criteria to differentiate and rank outcomes. The language is based on the Object Management Group's Systems Modeling Language (SysML). Where possible, existing SysML constructs are used; when additional constructs are needed, SysML's profile mechanism is used to extend the language. Simply modeling the selection decision explicitly is not sufficient; computational tools are also needed to explore the space of possible solutions and inform designers about the selection of the appropriate alternative. In this investigation, computational tools from the mathematical programming domain are considered for this purpose. A framework for modeling an architecture selection decision as a mixed-integer linear program (MIP) is presented. MIP solvers can then solve the MIP problem to identify promising candidate architectures at early stages of the design process. Mathematical programming is a common optimization domain, but it is rarely used in this context because of the difficulty of manually formulating an architecture selection or exploration problem as a mathematical programming optimization problem. The formulation is presented in a modular fashion; this enables the definition of a model transformation, also presented, that can be applied to transform the more compact SysML representation into the mathematical programming problem. A modular superstructure representation is used to model the design space; in a superstructure, a union of all potential architectures is represented as a set of discrete and continuous variables. Algebraic constraints are added to describe both acceptable variable combinations and system behavior, allowing the solver to eliminate clearly poor alternatives and identify promising ones.
The overall framework is demonstrated on the selection of an actuation subsystem for a hydraulic excavator. This example is chosen because of the variety of potential architecture embodiments and also a plethora of well-known configurations which can be used to verify the results.
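A toy rendering of the superstructure idea for a hydraulic actuation subsystem, with exhaustive enumeration standing in for the MIP solver; the component names, costs, and constraint are invented:

```python
# Hedged sketch: a superstructure is the union of all candidate architectures;
# discrete choice variables select one component per subsystem, constraints
# prune infeasible combinations, and an objective ranks the survivors.
# Exhaustive enumeration stands in here for the thesis's MIP formulation.
from itertools import product

pumps = {"fixed": {"cost": 10, "efficiency": 0.80},
         "variable": {"cost": 18, "efficiency": 0.92}}
valves = {"throttle": {"cost": 4, "loss": 0.15},
          "independent_metering": {"cost": 9, "loss": 0.05}}

def feasible(pump, valve):
    # invented constraint: overall efficiency must exceed 0.75
    return pumps[pump]["efficiency"] * (1 - valves[valve]["loss"]) > 0.75

candidates = [(p, v) for p, v in product(pumps, valves) if feasible(p, v)]
best = min(candidates, key=lambda pv: pumps[pv[0]]["cost"] + valves[pv[1]]["cost"])
print("feasible architectures:", candidates)
print("cheapest feasible architecture:", best)
```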
510

An Approach for the Robust Design of Data Center Server Cabinets

Rolander, Nathan Wayne 29 November 2005 (has links)
The complex turbulent flow regimes encountered in many thermal-fluid engineering applications have proven resistant to the effective application of systematic design because of the computational expense of model evaluation and the inherent variability of turbulent systems. In this thesis, the integration of the Proper Orthogonal Decomposition (POD) for reduced-order modeling of turbulent convection with the application of robust design principles is proposed as a practical design approach. The POD has been used successfully to create low-dimensional steady-state flow models within a prescribed range of parameters. The underlying idea of robust design is to determine superior solutions to design problems by minimizing the effects of variation on system performance, without eliminating their causes. The integration of these constructs using the compromise Decision Support Problem (DSP) results in an efficient, effective robust design approach for complex turbulent convective systems. The efficacy of the approach is illustrated through application to the configuration of data center server cabinets. Data centers are computing infrastructures that house large quantities of data processing equipment. The data processing equipment is stored in 2 m high enclosures known as cabinets. The demand for increased computational performance has led to very high power density cabinet designs, with a single cabinet dissipating up to 20 kW. The computer servers are cooled by turbulent convection and have unsteady heat generation and cooling air flows, yielding substantial inherent variability, yet they are subject to some of the most stringent operational requirements of any engineering system. Through variation of the power load distribution and flow parameters, such as the rate of cooling air supplied, thermally efficient configurations that are insensitive to variations in operating conditions are determined. This robust design approach is applied to three common data center server cabinet designs, at increasing levels of modeling detail and complexity. Results of applying this approach to the example problems studied show that the resulting thermally efficient configurations are capable of dissipating up to a 50% greater heat load with a 15% decrease in temperature variability, using the same cooling infrastructure. These results are validated rigorously, including comparison of detailed CFD simulations with experimentally gathered temperature data from a mock server cabinet. Finally, with the approach validated, augmentations to the approach are considered for multi-scale design, extending the approach's domain of applicability.
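The POD step at the heart of the reduced-order flow model is a snapshot SVD; a minimal sketch follows, with synthetic data standing in for CFD snapshots:

```python
# Hedged sketch of Proper Orthogonal Decomposition: collect flow/temperature
# snapshots as columns, take the SVD of the mean-subtracted matrix, and keep
# the few dominant modes that capture most of the energy. Synthetic data here;
# the thesis applies this to CFD snapshots of server-cabinet airflow.
import numpy as np

rng = np.random.default_rng(0)
n_points, n_snapshots = 500, 40
snapshots = rng.standard_normal((n_points, n_snapshots))   # placeholder fields

mean_field = snapshots.mean(axis=1, keepdims=True)
U, s, _ = np.linalg.svd(snapshots - mean_field, full_matrices=False)

energy = np.cumsum(s**2) / np.sum(s**2)
k = int(np.searchsorted(energy, 0.95)) + 1      # modes capturing 95% of energy
modes = U[:, :k]                                 # reduced-order basis
print(f"{k} POD modes capture {energy[k-1]:.1%} of the snapshot energy")
```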
