  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
51

RiSE reference model for software reuse adoption in Brazilian companies

Garcia, Vinícius Cardoso 31 January 2010 (has links)
Fundação de Amparo a Pesquisa do Estado da Bahia / Many organizations are planning to invest, or have already invested, money, time, and resources in software reuse. With this investment, these organizations expect to improve their competitiveness in the market by reducing cost and effort, increasing productivity, and improving the quality and reliability of the software products they develop. A common problem is that reuse approaches in organizations are usually treated as a matter of technology adoption (environments and tools) and of processes, focusing on the technical aspects of reuse. In this scenario, reuse adoption processes (also called strategies, models, or programs) have stood out in the field as an enabler for obtaining the benefits associated with software reuse. However, existing processes have some crucial problems: for example, they are strongly tied to specific technologies; they demand a high initial investment; and they do not define, in a systematic and sufficiently detailed way, the activities, roles, inputs, and outputs of the whole process. This work therefore proposes a software reuse reference model to support reuse adoption and reuse capability assessment in organizations, grounded in the state of the art and the state of the practice in the field. Its definition was supported by detailed studies of reuse adoption processes, reuse reference models, and reuse capability assessment methods, involving informal surveys, empirical studies, and company reports. This thesis aims to demonstrate that, for companies that wish to adopt reuse, it is possible to establish a safer path, with lower risks and costs, than an ad hoc reuse strategy. In this scenario, the expected goals are: (i) to improve the performance of some aspects of development through reuse practices (cost, quality, productivity, and organizational competitiveness, among others); and (ii) to reduce the risks of adopting and/or improving a reuse program by supporting an incremental process.
52

Factors Affecting the Design and Use of Reusable Components

Anguswamy, Reghu 31 July 2013 (has links)
Designing software components for future reuse has been an important area in software engineering. A software system developed with reusable components follows a "with" reuse process, while a component designed to be reused in other systems follows a "for" reuse process. This dissertation explores the factors affecting design for reuse and design with reusable components through empirical studies. The studies involve Java components implementing a particular algorithm, a stemming algorithm that is widely used in the conflation domain. The method and empirical approach are general and independent of the programming language. Such studies may be extended to other types of components, for example, components implementing data structures such as stacks and queues.

Design for reuse: In this thesis, the first study analyzed one-use and equivalent reusable components for the overhead in terms of component size, effort required, number of parameters, and productivity. Reusable components were significantly larger than their equivalent one-use components and had significantly more parameters. The effort required for the reusable components was higher than for the one-use components. The productivity of the developers was significantly lower for the reusable components compared to the one-use components. Also, during the development of reusable components, the subjects spent more time on writing code than on designing the components, but not significantly so. A ranking of the design principles by frequency of use is also reported. A content analysis performed on the feedback is also reported, and the reasons for using and not using the reuse design principles are identified. A correlation analysis showed that the reuse design principles were, in general, used independently of each other.

Design with reuse: Through another empirical study, the effects of the size of a component and of the reuse design principles used in building the component on the ease of reuse were analyzed. It was observed that the higher the complexity the lower the ease of reuse, but the correlation is not significant. When considered independently, four of the reuse design principles (well-defined interface, clarity and understandability, generality, and separate concepts from content) significantly increased the ease of reuse, commonality and variability analysis significantly decreased it, and documentation had no significant impact. Experience in the programming language had no significant relationship with the reusability of components. Experience in software engineering and software reuse showed a relationship with reusability, but the effect size was small. Testing components before integrating them into a system was found to have no relationship with the reusability of components. A content analysis of the feedback is presented, identifying the challenges of components that were not easy to reuse. Features that make a component easily reusable were also identified. The Mahalanobis-Taguchi Strategy (MTS) was employed to develop a model based on the Mahalanobis distance to detect whether a component is easy to reuse. The identified factors within the model are: size of a component, a set of reuse design principles (well-defined interface, clarity and understandability, commonality and variability analysis, and generality), and component testing. / Ph. D.
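The MTS model mentioned in this abstract scores a candidate component by its Mahalanobis distance from a reference group of components known to be easy to reuse. The dissertation's own data and code are not reproduced here; the sketch below is only a minimal illustration of how such a distance could be computed, assuming two hypothetical factors (component size and a clarity score) and invented sample values. All names and numbers are illustrative, not taken from the study.

```java
public class MahalanobisSketch {
    public static void main(String[] args) {
        // Hypothetical reference group of easy-to-reuse components: {size in LOC, clarity score}
        double[][] normal = { {120, 4.5}, {150, 4.0}, {90, 4.8}, {200, 3.9}, {130, 4.2} };
        double[] candidate = { 480, 2.1 }; // component under evaluation (invented values)

        double[] mean = columnMeans(normal);
        double[][] covInv = invert2x2(covariance(normal, mean));
        double d = Math.sqrt(mahalanobisSquared(candidate, mean, covInv));
        System.out.printf("Mahalanobis distance = %.3f%n", d);
    }

    // Column-wise means of the reference group.
    static double[] columnMeans(double[][] x) {
        double[] m = new double[x[0].length];
        for (double[] row : x)
            for (int j = 0; j < m.length; j++) m[j] += row[j] / x.length;
        return m;
    }

    // Sample covariance matrix of the reference group.
    static double[][] covariance(double[][] x, double[] mean) {
        int n = x.length, p = mean.length;
        double[][] c = new double[p][p];
        for (double[] row : x)
            for (int i = 0; i < p; i++)
                for (int j = 0; j < p; j++)
                    c[i][j] += (row[i] - mean[i]) * (row[j] - mean[j]) / (n - 1);
        return c;
    }

    // Closed-form inverse for the 2x2 case used in this sketch.
    static double[][] invert2x2(double[][] a) {
        double det = a[0][0] * a[1][1] - a[0][1] * a[1][0];
        return new double[][] { {  a[1][1] / det, -a[0][1] / det },
                                { -a[1][0] / det,  a[0][0] / det } };
    }

    // (x - mean)^T * covInv * (x - mean)
    static double mahalanobisSquared(double[] x, double[] mean, double[][] covInv) {
        double d0 = x[0] - mean[0], d1 = x[1] - mean[1];
        return d0 * (covInv[0][0] * d0 + covInv[0][1] * d1)
             + d1 * (covInv[1][0] * d0 + covInv[1][1] * d1);
    }
}
```

In an actual MTS analysis, the distance would be computed over all identified factors and compared against a threshold tuned on the reference group.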
53

Incremental Reuse

Zunis, Courtney 27 October 2017 (has links)
No description available.
54

The Test and Training Enabling Architecture (TENA) Enabling Technology for the Joint Mission Environment Test Capability (JMETC) in Live, Virtual, and Constructive (LVC) Environments

Hudgins, Gene, Poch, Keith, Secondine, Juana 10 1900 (has links)
ITC/USA 2010 Conference Proceedings / The Forty-Sixth Annual International Telemetering Conference and Technical Exhibition / October 25-28, 2010 / Town and Country Resort & Convention Center, San Diego, California / The Joint Mission Environment Test Capability (JMETC) is a distributed live, virtual, and constructive (LVC) testing capability developed to support the acquisition community and to demonstrate Net-Ready Key Performance Parameters (KPP) requirements in a customer-specific Joint Mission Environment (JME). JMETC, using the Test and Training Enabling Architecture (TENA), provides connectivity to the Services' distributed test capabilities and simulations, and Industry test resources. TENA is well-designed for supporting JMETC events through its architecture and software capabilities which enable interoperability among range instrumentation systems, facilities, and simulations. TENA, used in major exercises and distributed test events, is also interfacing with other emerging range systems.
55

The Test and Training Enabling Architecture (TENA) Enabling Technology for the Joint Mission Environment Test Capability (JMETC) in Live, Virtual, and Constructive (LVC) Environments

Hudgins, Gene, Poch, Keith, Secondine, Juana 10 1900 (has links)
ITC/USA 2009 Conference Proceedings / The Forty-Fifth Annual International Telemetering Conference and Technical Exhibition / October 26-29, 2009 / Riviera Hotel & Convention Center, Las Vegas, Nevada / The Joint Mission Environment Test Capability (JMETC) is a distributed live, virtual, and constructive (LVC) testing capability developed to support the acquisition community and to demonstrate Net-Ready Key Performance Parameters (KPP) requirements in a customer-specific Joint Mission Environment (JME). JMETC, using the Test and Training Enabling Architecture (TENA), provides connectivity to the Services' distributed test capabilities and simulations, and Industry test resources. TENA is well-designed for supporting JMETC events through its architecture and software capabilities which enable interoperability among range instrumentation systems, facilities, and simulations. TENA, used in major exercises and distributed test events, is also interfacing with other emerging range systems.
56

Chemical phosphorus removal and its influence on sewage sludge particulates and metal availability

Knight, Jonathan James January 2000 (has links)
No description available.
57

Component interaction in distributed systems

Pryce, Nathaniel Graham January 2000 (has links)
No description available.
58

Reusable components for knowledge modelling

Motta, Enrico January 1998 (has links)
In this work I illustrate an approach to the development of a library of problem solving components for knowledge modelling. This approach is based on an epistemological modelling framework, the Task/Method/Domain/Application (TMDA) model, and on a principled methodology, which provide an integrated view of both library construction and application development by reuse.

The starting point of the proposed approach is a task ontology. This formalizes a conceptual viewpoint over a class of problems, thus providing a task-specific framework which can be used to drive the construction of a task model through a process of model-based knowledge acquisition. The definitions in the task ontology provide the initial elements of a task-specific library of problem solving components.

In order to move from problem specification to problem solving, a generic, i.e. task-independent, model of problem solving as search is introduced and instantiated in terms of the concepts in the relevant task ontology, say T. The result is a task-specific, but method-independent, problem solving model. This generic problem solving model provides the foundation from which alternative problem solving methods for a class of tasks can be defined. Specifically, it provides: i) a highly generic method ontology, say M; ii) a set of generic building blocks (generic tasks), which can be used to construct task-specific problem solving methods; and iii) an initial problem solving method, which can be characterized as the most generic problem solving method that subscribes to M and is applicable to T. More specific problem solving methods can then be (re-)constructed from the generic problem solving model through a process of method/ontology specialization and method-to-task application.

The resulting library of reusable components enjoys a clear theoretical basis and provides robust support for reuse. In the thesis I illustrate the approach in the area of parametric design.
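The "problem solving as search" model described in this abstract is defined at the knowledge level, not as code. Purely as a loose illustration of the idea of a task-independent search skeleton whose goal test and successor function are supplied by a task-specific layer, the sketch below shows one possible reading; the names and the toy parametric-design task are assumptions made for the example and do not come from the thesis or its library.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.List;
import java.util.Optional;
import java.util.Set;
import java.util.function.Function;
import java.util.function.Predicate;

public class SearchSketch {
    // Task-independent search: the goal test and successor function stand in for
    // the concepts that a task ontology would contribute.
    static <S> Optional<S> solve(S initial, Predicate<S> isGoal, Function<S, List<S>> successors) {
        Deque<S> frontier = new ArrayDeque<>();
        frontier.add(initial);
        Set<S> seen = new HashSet<>();
        seen.add(initial);
        while (!frontier.isEmpty()) {
            S state = frontier.poll();
            if (isGoal.test(state)) return Optional.of(state);
            for (S next : successors.apply(state))
                if (seen.add(next)) frontier.add(next);
        }
        return Optional.empty();
    }

    public static void main(String[] args) {
        // Toy "parametric design" task: find an integer parameter whose square exceeds 50.
        Optional<Integer> design = solve(1, n -> n * n > 50, n -> List.of(n + 1));
        design.ifPresent(d -> System.out.println("design parameter = " + d));
    }
}
```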
59

Generating Members of a Software Product Line Using Combinatory Logic

Hoxha, Armend 04 May 2015 (has links)
A Product Line Family contains similar applications that differ only in the sets of supported features from the family. To properly engineer these product lines, programmers design a common code base used by all members of the product line. The structure of this common code base is often an Object-Oriented (OO) framework, designed to contain the detailed domain-specific knowledge needed to implement these applications. However, these frameworks are often quite complex and implement detailed dynamic behavior with complex coordination among their classes. Extending an OO framework to realize a single product line instance is a unique exercise in OO programming. The ultimate goal is to develop a consistent approach for managing all instances which relies on configuration rather than programming.

In this thesis, we show the novel application of Combinatory Logic to automatically synthesize correct product line members using higher-level code fragments specified by means of combinators. Using the same starting point of an OO framework, we show how to design a repository of combinators using FeatureIDE, an extensible framework for Feature-Oriented Software Development. We demonstrate a proof of concept using two different Java-based frameworks: a card solitaire framework and a multi-objective optimization algorithms framework. These case studies rely on LaunchPad, an Eclipse plugin developed at WPI that extends FeatureIDE.

The broader impact of this work is that it enables framework designers to formally encode the complex functional structure of an OO framework. Once this task is accomplished, generating product line instances becomes primarily a configuration process, which enables correct code to be generated by construction based on the combinatory logic.
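The thesis builds its combinator repositories with FeatureIDE and the LaunchPad Eclipse plugin, neither of which is reproduced here. As a loose, assumed illustration of the underlying idea only (feature implementations treated as composable combinators applied to a framework-provided base product, so that building a variant is configuration rather than programming), the sketch below uses plain Java function composition; the solitaire-flavored feature names are invented for the example.

```java
import java.util.List;
import java.util.function.UnaryOperator;

public class ProductLineSketch {
    // Stand-in for a product skeleton produced by the shared OO framework.
    record Product(String description) { }

    // Each "combinator" wraps the product with one feature's contribution.
    static final UnaryOperator<Product> UNDO_SUPPORT =
            p -> new Product(p.description() + " + undo");
    static final UnaryOperator<Product> AUTO_MOVES =
            p -> new Product(p.description() + " + automatic moves");
    static final UnaryOperator<Product> SCORING =
            p -> new Product(p.description() + " + scoring");

    // Composing a selected set of combinators yields one product line member;
    // choosing the set is a configuration decision, not new programming.
    static Product assemble(Product base, List<UnaryOperator<Product>> features) {
        Product result = base;
        for (UnaryOperator<Product> feature : features) result = feature.apply(result);
        return result;
    }

    public static void main(String[] args) {
        Product variant = assemble(new Product("solitaire core"), List.of(UNDO_SUPPORT, SCORING));
        System.out.println(variant.description()); // solitaire core + undo + scoring
    }
}
```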
60

The effect of polysaccharidic gums on activated carbon treatment of textile waste water

Roy, Christian January 1976 (has links)
No description available.
