21

Abordagem para guiar a reprodução de experimentos computacionais: aplicações em biologia computacional / An approach to guide the reproduction of computational experiments: applications in computational biology

Knop, Igor de Oliveira 31 March 2016 (has links)
Systems biology is one of the most powerful emerging areas of the third millennium, combining, in an interdisciplinary way, knowledge and tools from biology, computer science, medicine, chemistry and engineering. However, the continued development of computational experiments is accompanied by problems such as the manual integration of simulation and analysis tools, the loss of models through software obsolescence, and the difficulty of reproducing experiments due to missing details of the execution environment used. Most quantitative models published in biology are lost because they are either no longer available or insufficiently characterized to allow their reproduction. This work proposes an approach to guide the recording of in silico experiments with a focus on their reproduction. The approach involves the creation of a series of annotations during computational modeling, supported by a software environment in which the researcher carries out tool integration, process description and experiment execution. The goal is to capture the modeling process non-invasively in order to increase the exchange of knowledge, allow repetition and validation of results, and reduce rework in interdisciplinary research groups. A prototype environment was built and two different tool workflows were integrated as case studies. The first uses models and tools from cardiac electrophysiology to build new applications on top of the environment. The second presents a new use of system-dynamics metamodels to simulate the response of the innate immune system in a planar section of tissue. Complete capture of the simulation and output-processing workflows was observed in both control experiments. The environment allowed experiments to be reproduced and adapted at three different levels: creating new experiments with the same structure as the original; defining new applications that use variations of the original experiment's structure; and reusing the workflow with changes to the original models and conditions.
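The thesis's prototype environment is not reproduced here; the sketch below is only a minimal illustration, with hypothetical names, of the kind of non-invasive annotation the abstract describes: wrapping one workflow step so that the command, input checksums and execution environment are recorded alongside its outputs.

```python
import hashlib
import json
import platform
import subprocess
import sys
from datetime import datetime, timezone

def sha256_of(path):
    """Checksum an input file so a rerun can verify it uses the same data."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def run_annotated(cmd, input_files, annotation_path):
    """Run one workflow step and write a provenance annotation next to its outputs."""
    record = {
        "command": cmd,                                    # e.g. ["./simulator", "params.xml"]
        "inputs": {p: sha256_of(p) for p in input_files},  # data the step depends on
        "started": datetime.now(timezone.utc).isoformat(),
        "platform": platform.platform(),                   # execution environment details
        "python": sys.version,
    }
    result = subprocess.run(cmd, capture_output=True, text=True)
    record["returncode"] = result.returncode
    with open(annotation_path, "w") as f:
        json.dump(record, f, indent=2)
    return result
```

A reproduction attempt can then compare the recorded checksums and platform string against its own environment before rerunning the step.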
22

Uma arquitetura de baixo acoplamento para execução de padrões de controle de fluxo em grades / A loosely coupled architecture to run workflow control-flow patterns in grids

Alexandre Ricardo Nardi 27 April 2009 (has links)
The use of workflow control-flow patterns in e-Science applications improves productivity, allowing the scientist to concentrate on his or her own research area. However, the use of workflow control-flow patterns for execution in grids remains an open question. This work presents a loosely coupled, extensible architecture that allows patterns to be executed with or without a grid, transparently to the scientist. It also describes the Combined Join Pattern, which addresses parallelization scenarios commonly found in e-Science applications. As a result, the work is expected to ease the scientist's tasks, offering greater flexibility in grid usage and in the representation of parallelization scenarios.
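The Combined Join Pattern itself is defined in the thesis and not reproduced here; as a rough sketch of the family of join behaviours involved, the hypothetical k-out-of-n join below lets a workflow continue as soon as k of its n parallel branches complete, independently of whether a grid sits behind the executor.

```python
from concurrent.futures import FIRST_COMPLETED, ThreadPoolExecutor, wait

def k_of_n_join(branches, k):
    """Run all branches in parallel; return as soon as k of them have finished.

    `branches` is a list of zero-argument callables (the parallel branch bodies).
    """
    results = []
    with ThreadPoolExecutor(max_workers=len(branches)) as pool:
        pending = {pool.submit(b) for b in branches}
        while pending and len(results) < k:
            done, pending = wait(pending, return_when=FIRST_COMPLETED)
            results.extend(f.result() for f in done)
        for f in pending:
            f.cancel()  # the join has fired; branches not yet started are dropped
    return results[:k]

if __name__ == "__main__":
    import random, time
    branches = [lambda i=i: (time.sleep(random.random()), i)[1] for i in range(5)]
    print(k_of_n_join(branches, k=3))  # continues once any 3 of the 5 branches finish
```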
23

Semantic Web Queries over Scientific Data

Andrejev, Andrej January 2016 (has links)
Semantic Web and Linked Open Data provide a potential platform for the interoperability of scientific data, offering a flexible model for providing machine-readable and queryable metadata. However, RDF and SPARQL have gained limited adoption within the scientific community, mainly due to the lack of support for managing massive numeric data, along with certain other important features – such as extensibility with user-defined functions, query modularity, and integration with existing environments and workflows. We present the design, implementation and evaluation of Scientific SPARQL – a language for querying data and metadata combined, represented using the RDF graph model extended with numeric multidimensional arrays as node values – RDF with Arrays. The techniques used to store RDF with Arrays in a scalable way and to process Scientific SPARQL queries and updates are implemented in our prototype software – the Scientific SPARQL Database Manager (SSDM) – and its integrations with data storage systems and computational frameworks. This includes scalable storage solutions for numeric multidimensional arrays and an efficient implementation of array operations. The arrays can be physically stored in a variety of external storage systems, including files, relational databases, and specialized array data stores, using our Array Storage Extensibility Interface. Whenever possible, SSDM accumulates array operations and accesses array contents in a lazy fashion. In scientific applications, numeric computations are often used for filtering or post-processing the retrieved data, and these can be expressed in a functional way. Scientific SPARQL allows common query sub-tasks to be expressed with functions defined as parameterized queries. This becomes especially useful together with functional language abstractions such as lexical closures and second-order functions, e.g. array mappers. Existing computational libraries can be interfaced and invoked from Scientific SPARQL queries as foreign functions. Cost estimates and alternative evaluation directions may be specified, aiding the construction of better execution plans. Costly array processing, e.g. filtering and aggregation, is thus performed on the server, reducing the amount of communication. Furthermore, common supported operations are delegated to the array storage back-ends, according to their capabilities. Both the expressivity and the performance of Scientific SPARQL are evaluated on a real-world example, and further performance tests are run using our mini-benchmark for array queries.
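SciSPARQL's concrete syntax is defined in the thesis and not reproduced here. As a point of contrast, the sketch below (hypothetical data and file names) shows the conventional two-step pattern the abstract argues against: query the RDF metadata with standard SPARQL, then drop out of the query language into NumPy for the numeric filtering that Scientific SPARQL would express inside the query itself.

```python
import numpy as np
from rdflib import Graph

# Toy RDF metadata: two measurement runs, each pointing to an array file on disk.
TTL = """
@prefix ex: <http://example.org/> .
ex:run1 ex:instrument "sensorA" ; ex:arrayFile "run1.npy" .
ex:run2 ex:instrument "sensorB" ; ex:arrayFile "run2.npy" .
"""

g = Graph()
g.parse(data=TTL, format="turtle")

# Step 1: query the metadata with standard SPARQL.
rows = g.query("""
    PREFIX ex: <http://example.org/>
    SELECT ?file WHERE { ?run ex:instrument "sensorA" ; ex:arrayFile ?file . }
""")

# Step 2: leave the query language and post-process the arrays with NumPy,
# the split that RDF with Arrays removes by making arrays node values.
for (file_literal,) in rows:
    data = np.load(str(file_literal))  # assumes run1.npy exists on disk
    print(str(file_literal), data[data > 0.5].mean())
```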
24

Adapting building information modeling (BIM) for affordable & sustainable housing

Dowhower, Justin Firuz 01 November 2010 (has links)
The purpose of this thesis is to determine whether Building Information Modeling (BIM) is an effective means of encouraging stakeholder collaboration throughout the building design/construction process and of improving affordable and sustainable strategies for infill housing development. The research methodology includes literature reviews, interviews, case studies, simulations, and experimentation. Literature reviews cover documentation regarding BIM, housing affordability and policy, sustainable design strategies, and integrated design practice. I conducted interviews with local stakeholders who had participated in local affordable/sustainable housing projects. The primary case study was the Alley Flat Initiative (2003-2010), which I had the opportunity to be involved with in various capacities as a participant observer. Simulations were performed using a BIM software tool to 'redesign' the first Alley Flat Initiative prototype and compare design workflows. Finally, experimentation involved the instruction of BIM software and the exploration of its use within an academic design studio environment. The findings indicate four significant conclusions. First, the research suggests that inflated soft project costs (overhead, administration, and services) could be reduced if local city governments adopted BIM in conjunction with housing review and permitting processes; in addition, the city could use BIM data to quantify building impacts on energy and resources over time. Second, sustainability innovation can be easier to integrate within a BIM workflow because of the software's high capacity to exchange information with third-party analysis tools; one barrier that must be overcome, however, is financial: the software and staff-training costs associated with BIM technology. Third, BIM requires 'front-loading' projects with more information earlier in the design process, which encourages greater transparency and more direct collaboration between stakeholders. A fully leveraged BIM workflow may not be feasible beyond local small-scale architects and builders, given the relatively steep learning curve and higher software costs, but a hybrid approach might be possible depending on how residential construction practices and BIM software development evolve in the near future. Fourth, BIM can make project information centralized, accessible, and long-lasting – serving as a communication and learning tool across disciplines and between expert and non-expert participants. The product of this research includes recommendations for all stakeholder groups engaged in leveraging BIM for affordable and sustainable housing development. Additional related topics of inquiry that fell outside the scope of this research are also included for future investigation.
25

Soft proofing using liquid crystal displays

Leckner, Sara January 2004 (has links)
<p>Development of colour management systems, the level ofstandardisation, as well as the embedding of facilities forcolour management into computer operating systems and software,enables successful future interoperability of colour reproductionin the graphic arts industry. Yet colour reproduction from onemedium to another, still gives rise to inconsistencies.</p><p>This thesis investigates colour management and controlprocesses in premedia and press process workflows in graphic artsproduction, including standards, instruments and procedures. Thegoal is to find methods for higher efficiency and control ofcolour print media production processes, aiming at increasingcolour consistency and process automation and of reducingoverheads. The focus is on the control of colour data by displaysin prepress processes producing low quality paper products. Inthis respect the greatest interest of this thesis is on technicaland visual characteristics of displays with respect tp thereproduction of colour, especially desktop Thin Film TransistorLiquid Crystal Displays (TFTLCD) compared to portable TFTLCDs andCathod Ray Tube (CRT) monitors.</p><p>In order to reach the desired goal, this thesis is based on aliterature survey and empirical studies. The empirical studiesinclude both qualitative and quantitative methods, organised intothree parts:</p><p>    Colour process management: Analysed case studies of theimplementation of colour management in entire graphic artsproduction workflow processes.</p><p>    Display technology: LCD and CRT displays have been examinedthrough measurements to establish their fundamental strengthsand weaknesses in reproducing colours.</p><p>    Comparison of reproduction: A perceptual experiment hasbeen conducted to determine the ability of the disparatecomponents included in a colour management system to co-operateand match reproduced colour, according to the perceivedpreference of observers.</p><p>It was found that in most cases consistent colour fidelitydepends on the knowledge and experience of the actors involved inthe production process, including the utilisation of routines andequipment. Lack of these factors is not necessarily fatal for thefinal low quality paper colour product, but obstructs theautomation. In addition, increased digitalisation will increasethe importance of displays in such processes. The results showthat CRTs and desktop LCDs meet most of the demands of colourreproduction in various areas of low quality paper productionprocesses, e.g. newspaper production. However, some fundamentalaspects, such as low digital input values, viewing angles andcolour temperature, matters that concern characterisation andcalibration, still need to be developed. Concerning softproofing, the matching correspondence between hard and softcopies gives similar results for both CRT and LCDs forhigh-quality paper originals, if the luminance is decreased onthe LCD (to luminance levels of CRTs). Soft proofing of lowquality papers gives equally lower matching agreement for bothCRT and LCD, in this case when the luminance of the LCD is sethigher (e.g. about twice the levels luminance levels ofCRTs).</p><p><b>Keywords:</b>Displays, LCD, CRT, premedia, prepress, softproof, workflows, newspaper, colour management systems, colourcontrol, colour reproduction</p>
26

Interactive scheduling and visualisation

Skalický, Tomáš January 2012 (has links)
The goal of this thesis was to design and implement a graphical tool for the visualization and editing of schedules that provides a function for the automatic repair of violated constraints in the schedule. The resulting application, called the Gantt Viewer, is integrated into the FlowOpt project, which represents a complete solution for modeling workflows, creating schedules from them, and analysing these schedules. The application has been developed with a focus on an intuitive user interface and on performance when managing large schedules. Thanks to its cooperation with the other modules of the FlowOpt project, it enables the user to visualize extended manufacturing schedules. Moreover, the Gantt Viewer incorporates a repair tool exploiting a new Repair-DTP algorithm, which is both introduced and demonstrated in this work.
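Repair-DTP is introduced in the thesis and not sketched here; the toy code below (hypothetical activity data) shows only the baseline problem a repair tool starts from: detecting activities that start before their predecessors finish and naively shifting them right until the precedence constraints hold.

```python
from dataclasses import dataclass, field

@dataclass
class Activity:
    name: str
    start: int
    duration: int
    predecessors: list = field(default_factory=list)  # names that must finish first

    @property
    def end(self):
        return self.start + self.duration

def repair_precedence(activities):
    """Naive repair: shift activities right until no precedence constraint is violated."""
    by_name = {a.name: a for a in activities}
    changed = True
    while changed:  # iterate to a fixed point; assumes the precedence graph is acyclic
        changed = False
        for a in activities:
            earliest = max((by_name[p].end for p in a.predecessors), default=0)
            if a.start < earliest:
                a.start = earliest
                changed = True
    return activities

acts = [Activity("cut", 0, 4),
        Activity("weld", 2, 3, ["cut"]),
        Activity("paint", 4, 2, ["weld"])]
for a in repair_precedence(acts):
    print(a.name, a.start, a.end)  # weld shifts to start at 4, paint to start at 7
```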
27

Estratégias de escalonamento de workflows com tarefas paralelas e sequenciais em grades computacionais. / Strategies for scheduling workflows composed of sequential and parallel tasks on grid environments.

Stanzani, Silvio Luiz 18 October 2013 (has links)
The demand for high performance is a common problem in many scientific applications. Distributed processing environments such as clusters, computational grids and multi-cluster environments have been developed to support the use of several resources simultaneously by the same application. Computationally intensive applications are structured as workflows and executed with the support of middleware that abstracts the complexity of using such environments. In grid computing environments, executing workflows containing both sequential tasks and internally parallel tasks with good performance is a challenge, due to the heterogeneity and dynamic behavior of the environment; the scheduling of workflows on such environments is therefore essential. The task scheduling problem in its general form is NP-complete, so the study of workflow scheduling in grid computing environments is fundamental to improving the performance of computationally intensive applications. The aim of this thesis is to propose workflow scheduling strategies that explore the following aspects: evaluating the possibility of executing each internally parallel task using resources from multiple clusters, and adapting scheduling plans when new workflows are submitted. Two strategies were developed: the first is a static workflow scheduling strategy, which assumes an environment dedicated to the execution of a single workflow; the second was designed to be used in conjunction with the first, in order to improve the response time of multiple workflows that may be submitted at different times. The proposed strategies were evaluated in a simulation environment.
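The two strategies themselves are specified in the thesis and evaluated in simulation; the sketch below is a generic static list-scheduling heuristic under assumed task and cluster parameters, shown only to make the idea of a static scheduling plan concrete: each task, in dependency order, is placed on the cluster that minimises its finish time.

```python
def list_schedule(tasks, clusters):
    """Greedy static scheduling: place each task on the cluster minimising its finish time.

    `tasks`    : topologically ordered list of (name, work, dependencies)
    `clusters` : dict mapping cluster name to speed (work units per time unit)
    """
    cluster_free = {c: 0.0 for c in clusters}  # when each cluster next becomes idle
    finish = {}                                # task name -> finish time
    placement = {}                             # task name -> chosen cluster
    for name, work, deps in tasks:
        ready = max((finish[d] for d in deps), default=0.0)
        best = min(clusters,
                   key=lambda c: max(ready, cluster_free[c]) + work / clusters[c])
        start = max(ready, cluster_free[best])
        finish[name] = start + work / clusters[best]
        cluster_free[best] = finish[name]
        placement[name] = best
    return placement, finish

# Hypothetical four-task workflow on two clusters of different speeds.
tasks = [("a", 4.0, []), ("b", 2.0, ["a"]), ("c", 6.0, ["a"]), ("d", 1.0, ["b", "c"])]
print(list_schedule(tasks, {"fast": 2.0, "slow": 1.0}))
```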
29

Grupinio darbo procesai paskirstytose redakcinėse sistemose / Groupware processes in distributed editorial systems

Šimkevičius, Tomas 28 May 2005 (has links)
This paper analyses different systems and concepts for managing editorial system processes. Based on this analysis, a new approach is developed in the field of triple-based (user-group-role) access control and of message/step-based workflow for editorial systems. The technologies and basic concepts used in access control and workflow methods are presented and discussed with respect to their impact on software development.
30

Business Policy Modeling and Enforcement in Relational Database Systems

Ataullah, Ahmed January 2014 (has links)
Database systems maintain the integrity of the stored information by ensuring that modifications to the database comply with constraints designed by the administrators. As the number of users and applications sharing a common database increases, so does the complexity of the set of constraints that originate from higher-level business processes. The lack of a systematic mechanism for integrating and reasoning about a diverse set of evolving and potentially interfering policies, manifested as database-level constraints, makes corporate policy management within relational systems a chaotic process. In this thesis we present a systematic method of mapping a broad set of process-centric business policies onto database-level constraints. We exploit the observation that the state of a database represents the union of the states of every ongoing business process, and we thus establish a bijective relationship between progression in individual business processes and changes in the database state space. We propose graphical notations that are equivalent to integrity constraints specified in the linear temporal logic of the past. Furthermore, we demonstrate how this notation can accommodate a wide array of workflow patterns, can allow multiple policy makers to implement their own process-centric constraints independently using their own logical policy models, and can model-check these constraints within the database system to detect potentially conflicting constraints across several different business processes. A major contribution of this thesis is that it bridges several different areas of research, including database systems, temporal logics, model checking, and business workflow/policy management, to propose an accessible method of integrating, enforcing, and reasoning about the consequences of process-centric constraints embedded in database systems. As a result, the task of ensuring that a database continuously complies with evolving business rules governed by hundreds of processes, traditionally handled by an army of database programmers regularly updating triggers and batch procedures, is made easier, more manageable, and more predictable.
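As a small worked illustration of the kind of rule such notations capture (the business policy below is invented for the example, not taken from the thesis), consider the constraint "an order may only be shipped if a payment for it was recorded at some earlier point". In the linear temporal logic of the past this becomes a formula over the history of database states:

$$\mathrm{G}\,\big(\mathit{shipped}(o) \rightarrow \mathrm{O}\,\mathit{paid}(o)\big)$$

where $\mathrm{G}$ ("globally") requires the implication to hold in every state and $\mathrm{O}$ ("once") requires that $\mathit{paid}(o)$ held in the current or some earlier state. At the database level, such a constraint is enforced by rejecting any transaction that would produce a state where $\mathit{shipped}(o)$ holds although $\mathit{paid}(o)$ has never held before.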
