About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
161

O marketing e a abertura de capital

Perlin, Ricardo Scherer January 2010 (has links)
No atual contexto da economia brasileira a competição exige das empresas o conhecimento sobre o retorno de seus investimentos. Neste panorama o marketing historicamente encontrou dificuldades para se adequar à linguagem financeira e auferir o resultado de seus esforços. A discrepância entre métodos de mensuração afeta a credibilidade do marketing enquanto área de alocação de recursos dentro da companhia (RUST et al., 2004). Assim, presentemente, os gestores estão sob uma intensa pressão para justificarem seus gastos com marketing em uma economia de redução de custos (AMBLER; PUNTONI, 2003). Dessa forma, este estudo procura verificar o impacto dos investimentos de marketing no mercado acionário, viabilizado através da análise da oferta pública inicial (Initial Public Offering - IPO), sob a ótica do underpricing (subprecificação) e do índice de demanda. O estudo faz uso de técnicas multivariadas como regressão linear, regressão logística e análise de variância (One way ANOVA). Os resultados sugerem que não há evidências empíricas do impacto dos investimentos de marketing, tanto no valor da subprecificação como no índice de demanda. Ademais, verifica-se que há influência significativa da atividade de estabilização do underwriter na subprecificação, bem como da reputação do underwriter e do financiamento anterior da companhia no indicador de demanda da oferta pública inicial. / In the current context of the Brazilian economy, competition requires that companies recognize the return on their investments. In this panorama, marketing has historically found it difficult to adopt the language of finance and measure the results of its own efforts. This discrepancy between measurement methods affects the credibility of marketing as an area of resource allocation within the company (RUST et al., 2004). Thus, at present, managers are under intense pressure to justify their marketing spending in a cost-cutting economy (AMBLER; PUNTONI, 2003). This study therefore seeks to verify the impact of marketing investments on the stock market, specifically in the Initial Public Offering (IPO), from the perspective of underpricing and the demand index. The study makes use of multivariate techniques such as linear regression, logistic regression and analysis of variance (one-way ANOVA). The results suggest that there is no empirical evidence of an impact of marketing investments on either the value of underpricing or the demand index. Moreover, it appears that the underwriter's stabilization activity has a significant influence on underpricing, and that the underwriter's reputation and the company's previous funding significantly influence the demand index of the initial public offering.
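The underpricing measure and the regression step the abstract describes can be sketched in Python. The firms and figures below are invented for illustration and are not data from the study:

```python
def underpricing(offer_price, first_close):
    """First-day return: (close - offer) / offer."""
    return (first_close - offer_price) / offer_price

def ols_slope(xs, ys):
    """Slope and intercept of a simple least-squares fit."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    b = cov / var
    return b, my - b * mx

# Illustrative IPOs: (marketing spend in R$ millions, offer price, first close).
ipos = [(2.0, 10.0, 11.2), (5.5, 20.0, 20.4), (1.0, 15.0, 16.8), (8.0, 12.0, 12.1)]
x = [m for m, _, _ in ipos]
y = [underpricing(p, c) for _, p, c in ipos]
slope, intercept = ols_slope(x, y)
```

The study's actual analysis adds logistic regression and one-way ANOVA on the demand index; this sketch covers only the simplest linear-regression piece.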
162

VEasy : a tool suite towards the functional verification challenges / VEasy: um conjunto de ferramentas direcionado aos desafios da verificação funcional

Pagliarini, Samuel Nascimento January 2011 (has links)
Esta dissertação descreve um conjunto de ferramentas, VEasy, o qual foi desenvolvido especificamente para auxiliar no processo de Verificação Funcional. VEasy contém quatro módulos principais, os quais realizam tarefas-chave do processo de verificação como linting, simulação, coleta/análise de cobertura e a geração de testcases. Cada módulo é comentado em detalhe ao longo dos capítulos. Todos os módulos são integrados e construídos utilizando uma Interface Gráfica. Esta interface possibilita o uso de uma metodologia de criação de testcases estruturados em camadas, onde é possível criar casos de teste complexos através do uso de operações do tipo drag-and-drop. A forma de uso dos módulos é exemplificada utilizando projetos simples escritos em Verilog. As funcionalidades da ferramenta, assim como o seu desempenho, são comparadas com algumas ferramentas comerciais e acadêmicas. Assim, algumas conclusões são apresentadas, mostrando que o tempo de simulação é consideravelmente menor quando efetuada a comparação com as ferramentas comerciais e acadêmicas. Os resultados também mostram que a metodologia é capaz de permitir um alto nível de automação no processo de criação de testcases através do modelo baseado em camadas. / This thesis describes a tool suite, VEasy, which was developed specifically for aiding the process of Functional Verification. VEasy contains four main modules that perform linting, simulation, coverage collection/analysis and testcase generation, which are considered key challenges of the process. Each of those modules is described in detail throughout the chapters. All the modules are integrated and built on top of a Graphical User Interface. This framework enables the testcase automation methodology, which is based on layers, where one is capable of creating complex test scenarios using drag-and-drop operations. Whenever possible, the usage of the modules is exemplified using simple Verilog designs. The capabilities of this tool and its performance were compared with some commercial and academic functional verification tools. Finally, some conclusions are drawn, showing that the overall simulation time is considerably smaller with respect to commercial and academic simulators. The results also show that the methodology is capable of enabling a great deal of testcase automation by using the layering scheme.
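The layered testcase idea can be illustrated with a small Python sketch: a testcase assembled from stacked layers, each contributing ordered stimulus steps. The layer names and step format here are invented; VEasy's actual generator works through a GUI and targets Verilog designs.

```python
class Layer:
    """One drag-and-drop layer contributing ordered stimulus steps."""
    def __init__(self, name, steps):
        self.name = name
        self.steps = steps  # e.g. (signal, value) pairs

def build_testcase(layers):
    """Flatten a stack of layers into one ordered stimulus sequence."""
    sequence = []
    for layer in layers:
        sequence.extend(layer.steps)
    return sequence

# Hypothetical layer stack for a simple design.
testcase = build_testcase([
    Layer("reset",     [("rst", 1), ("rst", 0)]),
    Layer("configure", [("mode", 3)]),
    Layer("stimulus",  [("din", 0xA5), ("valid", 1)]),
])
```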
163

Proposta de um modelo para avaliação das relações causais entre métricas de modelos de avaliação de desempenho

Fiterman, Luciano January 2006 (has links)
Os indicadores de desempenho têm papel fundamental na gestão das organizações, pois mostram aos decisores a situação da organização e como ela se encontra em relação a seus objetivos. Entre os sistemas de indicadores de desempenho utilizado nas organizações, têm tido destaque a conjugação de métricas financeiras e não financeiras, baseada na crença de que a melhora nos resultados não financeiros irá ocasionar a melhora nos índices financeiros. Entretanto, não há uma metodologia consagrada para testar se esses relacionamentos (relações de causa-e-efeito) existem na realidade. O objetivo desse trabalho foi propor e validar parcialmente uma metodologia para testar e quantificar as relações causais entre indicadores de desempenho. A seqüência de passos foi definida a partir da literatura através da implementação de ferramentas do Desdobramento da Função Qualidade, Gerenciamento pelas Diretrizes, Pensamento Sistêmico e Ferramenta para Seleção de Planos de Ação. O método escolhido para sua validação parcial foi o estudo de caso. A unidade de análise foi uma organização que já utiliza métricas financeiras e não financeiras e possui base histórica de dados. A pesquisa utilizou como fontes de evidência a observação participante e entrevista estruturada. Para a análise dos dados foram utilizadas técnicas estatísticas e representação escrita. Os resultados permitem concluir que a metodologia consegue quantificar as relações causais entre métricas de desempenho. A aplicação também gerou grande aprendizado organizacional. A principal contribuição desse trabalho é o modelo conceitual parcialmente validado o qual pode ser utilizado para transformar o sistema de indicadores de desempenho em fonte de informações para a tomada de decisão através da quantificação das relações de causa-e-efeito. / Performance metrics play a fundamental role in organizations, because they show decision makers the situation of the organization in relation to its objectives. Most of the metrics systems in use combine financial and non-financial indicators, based on the belief that an improvement in non-financial performance will cause the same behavior in financial results. However, there is no established methodology to test whether these relationships (causal relations) exist in the real world. The objective of this work is to propose and partially validate a methodology to test and quantify the causal relations among performance metrics. A sequence of steps was defined from the literature, using tools from Quality Function Deployment, Policy Deployment, Systems Thinking and the Tool for Action Planning Selection. The research method chosen was the case study. The research unit was an organization that already uses financial and non-financial metrics and has historical data for them. Participant observation and structured interviews were used as sources of evidence. Data analysis was carried out with statistical techniques and written representation. From the results, it can be concluded that the methodology makes it possible to quantify the causal relations between performance metrics. Its application also contributed to organizational learning. The main contribution of this work is the partially validated conceptual methodology, which can be used to turn the performance metric system into an information source for decision making through the quantification of causal relations.
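One step of the kind of test the abstract describes, checking whether a non-financial metric leads a financial one, can be sketched as a lagged correlation. The series below are invented for illustration and the thesis's own method uses a richer toolset:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def lagged_corr(leading, lagging, lag):
    """Correlate leading[t] with lagging[t + lag]."""
    return pearson(leading[:-lag], lagging[lag:]) if lag else pearson(leading, lagging)

satisfaction = [7.1, 7.4, 7.9, 8.2, 8.0, 8.5]  # illustrative survey scores
revenue      = [100, 103, 104, 110, 115, 114]  # illustrative R$ thousands

# Does satisfaction this period correlate with revenue next period?
r_lag1 = lagged_corr(satisfaction, revenue, lag=1)
```

A high lagged correlation is only suggestive of causality, which is exactly why the thesis wraps such tests in a broader methodology.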
164

Rep-Index : uma abordagem abrangente e adaptável para identificar reputação acadêmica / Rep-Index : a comprehensive and adaptable approach to identify academic reputation

Cervi, Cristiano Roberto January 2013 (has links)
A tarefa de avaliar a produção científica de um pesquisador é fortemente baseada na análise de seu currículo. É o que fazem, por exemplo, as agências de fomento à pesquisa e desenvolvimento ou comissões de avaliação, quando necessitam considerar a produção científica dos pesquisadores no processo de concessão de bolsas e auxílios, na seleção de consultores e membros de comitês, na aprovação de projetos ou simplesmente para avaliar o conceito de um programa de pós-graduação. Nesse contexto, a modelagem de perfis de pesquisadores é tarefa fundamental, especialmente quando se quer avaliar a reputação dos pesquisadores. Isto pode ocorrer por meio de um processo de análise da trajetória de toda a carreira científica do pesquisador. Tal processo envolve não somente aspectos relacionados a artigos ou livros publicados, mas também por outros elementos inerentes à atividade de um pesquisador, como orientações de trabalhos de mestrado e de doutorado; participação em defesas de mestrado e de doutorado; trabalhos apresentados em conferências; participação em projetos de pesquisa, inserção internacional, dentre outros. O objetivo deste trabalho é especificar um modelo de perfil de pesquisadores (Rep-Model) e uma métrica para medir reputação acadêmica (Rep-Index). O processo de modelagem do perfil envolve a definição de quais informações são relevantes para a especificação do perfil e as apresenta por meio de 18 elementos e 5 categorias. O processo para medir a reputação do pesquisador é definido por uma métrica que gera um índice. Esse índice é calculado mediante a utilização dos elementos constantes no perfil do pesquisador. Para avaliar a abordagem proposta na tese, diversos experimentos foram realizados. Os experimentos envolveram a avaliação dos elementos do Rep-Model por meio de análise de correlação e por algoritmos de mineração de dados. O Rep-Index também foi avaliado e correlacionado com duas métricas amplamente utilizadas na comunidade científica, o h-index e o g-index. Como baseline, foram utilizados todos os pesquisadores do CNPq das áreas de Ciência da Computação, Economia e Odontologia. O trabalho desenvolvido nesta tese está inserido no contexto da identificação da reputação de pesquisadores no âmbito acadêmico. A abordagem desta tese tem como premissa ser abrangente e adaptável, pois envolve a vida científica do pesquisador construída ao longo de sua carreira científica e pode ser utilizada em diferentes áreas e em diferentes contextos. / The task of evaluating the scientific production of a researcher is strongly based on the analysis of their curriculum. This is what research funding agencies and evaluation committees do, for example, when they need to consider the scientific production of researchers in the process of awarding grants and aid, in selecting consultants and committee members, in approving projects, or simply to assess the standing of a graduate program. In that context, modeling researcher profiles is a fundamental task, especially when one wants to evaluate the reputation of researchers. This can be done by analyzing the trajectory of the researcher's entire scientific career. Such a process involves not only aspects related to published papers or books, but also other elements inherent in the activity of a researcher, such as the supervision of master's and doctoral theses; participation in master's and doctoral defense committees; papers presented at conferences; participation in research projects; international integration; among others. This work specifies a profile model for researchers (Rep-Model) and a metric to measure academic reputation (Rep-Index). The profile modeling process involves defining which information is relevant to the specification of the profile and presents it through 18 elements and 5 categories. The process for measuring a researcher's reputation is defined by a metric that generates an index. This index is calculated using the information contained in the researcher's profile. To evaluate the approach proposed in the thesis, extensive experiments were conducted. The experiments involved the evaluation of the Rep-Model elements by means of correlation analysis and data mining algorithms. The Rep-Index was also evaluated and correlated with two metrics widely used in the scientific community, the h-index and the g-index. As a baseline, all CNPq researchers in the areas of Computer Science, Economics and Dentistry were used. The work in this thesis is set in the context of identifying the reputation of researchers within the academic sphere. The approach is premised on being comprehensive and adaptable, because it covers the scientific life the researcher has built throughout his or her career and can be used in different areas and in different contexts.
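The h-index against which Rep-Index is correlated has a standard definition: the largest h such that the researcher has h papers with at least h citations each. The sketch below computes it, plus a toy weighted composite as a stand-in for an index built from profile elements; the real Rep-Index's 18 elements and their weighting are defined in the thesis, not here.

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    cites = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(cites, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

def toy_composite(profile, weights):
    """Weighted sum over normalized profile elements (illustrative only)."""
    return sum(weights[k] * v for k, v in profile.items())

print(h_index([10, 8, 5, 4, 3]))  # → 4

# Hypothetical two-element profile, normalized to [0, 1].
score = toy_composite({"papers": 0.8, "supervision": 0.4},
                      {"papers": 0.6, "supervision": 0.4})
```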
165

Metrics in Software Test Planning and Test Design Processes

Afzal, Wasif January 2007 (has links)
Software metrics play an important role in measuring attributes that are critical to the success of a software project. Measurement of these attributes helps to make the characteristics of, and relationships between, the attributes clearer. This in turn supports informed decision making. The field of software engineering is affected by infrequent, incomplete and inconsistent measurements. Software testing is an integral part of software development, providing opportunities for measurement of process attributes. The measurement of software testing process attributes gives management better insight into the software testing process. The aim of this thesis is to investigate the metric support for software test planning and test design processes. The study comprises an extensive literature review and follows a methodical approach consisting of two steps. The first step analyzes the key phases in the software testing life cycle, the inputs required for starting the software test planning and design processes, and the metrics indicating the end of the software test planning and test design processes. After establishing a basic understanding of the related concepts, the second step identifies the attributes of the software test planning and test design processes, including metric support for each of the identified attributes. The results of the literature survey showed that there are a number of different measurable attributes for software test planning and test design processes. The study partitioned these attributes into multiple categories. For each of these attributes, different existing measurements are studied. A consolidation of these measurements is presented in this thesis, intended to give management an opportunity to consider improvements to these processes.
166

Engagement of Developers in Open Source Projects : A Multi-Case Study

Chodapaneedi, Mani Teja, Manda, Samhith January 2017 (has links)
Companies using open source projects tend to see increased innovation and productivity, which helps sustain their competitiveness. These projects involve various developers across the globe, who may be contributing to several other projects and who constantly engage with a project to improve and uplift it. In each open source project, the intensity and motivation with which developers engage and contribute vary over time. This research first aims to identify how the engagement and activity of developers in open source projects vary over time, and second to assess the reasons for the variance in the engagement activities of developers involved in various open source projects. First, a literature review was conducted to identify the available metrics that are helpful for analysing developers' engagement in open source projects. Second, we conducted a multi-case study investigating developers' engagement in 10 different open source projects of the Apache foundation. The GitHub repositories were mined to gather data on the engagement activities of developers in the selected projects. To identify the reasons for the variation in engagement and activity, we analysed the documentation of each project and also interviewed 10 developers and 5 instructors, who provided additional insights into the challenges of contributing to open source projects. The results of this research contain the list of factors that affect developers' engagement with open source projects, extracted from the case studies and strengthened through the interviews. From the data collected by repository mining, the selected projects were categorized by increasing or decreasing developer activeness. 
Using the archival data collected from the selected projects, the factors corporate support, community involvement, distribution of issues and contributions, and specificity of guidelines were identified as crucial to the success of open source projects, as reflected in contributor engagement. In addition, insights on using open source projects were collected from the perspectives of both developers and instructors. This research gave us deeper insight into the working of open source projects and the driving factors that influence the engagement and activeness of contributors. It is evident from this research that the stated factors impact the engagement and activeness of developers, so open source projects that satisfy at least these factors can expect to see increased engagement and activeness among contributors. The research also identifies the existing challenges and benefits of contributing to open source projects from different perspectives.
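The repository-mining step described above can be sketched as bucketing commit timestamps by month and labelling the resulting trend. Real mining would pull these timestamps from the GitHub API; the dates below are invented, and the trend rule is a deliberately crude illustration:

```python
from collections import Counter
from datetime import date

def monthly_activity(commit_dates):
    """Count commits per (year, month) bucket."""
    return Counter((d.year, d.month) for d in commit_dates)

def trend(counts_by_month):
    """Crude trend label: compare first and second half of the series."""
    series = [c for _, c in sorted(counts_by_month.items())]
    half = len(series) // 2
    first, second = sum(series[:half]), sum(series[half:])
    return "increasing" if second > first else "decreasing"

# Invented commit dates for one contributor.
commits = [date(2017, 1, 5), date(2017, 1, 9), date(2017, 2, 2),
           date(2017, 3, 11), date(2017, 3, 15), date(2017, 3, 30)]
activity = monthly_activity(commits)
```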
167

LeAgile Measurement and Metrics : A Systematic Literature Review and Case Study

Katikireddy, Naga Durga Leela Praveera, Veereddy, Nidhi January 2017 (has links)
Context. Software engineers have been endeavouring to quantify software to obtain quantitative insights into its properties and quality since the field's inception. Of late, the use of Lean and Agile (LeAgile) methodologies has become progressively mainstream in the software industry. Managing software life-cycle tasks, including planning, controlling and monitoring, is primarily done through measurement. This is particularly true in LeAgile organizations, where these are day-to-day activities. In other words, managing an agile development process, like any process, requires the collection of appropriate metrics to ensure visibility, inspection and adaptation, as it is vital to know the effect of these methods and how product development and projects are performing. Are the goals being met? Are there any wastes? Is value being created? All of this depends on the ability to measure as correctly and as objectively as possible. Getting good metrics and interpreting them correctly is central to any product development organization. In agile approaches, the use of any metric needs to be clearly justified to decrease the amount of inefficient work done. This creates the need to discover metrics relevant to LeAgile methods in order to realize the benefits of measurement. Objectives. The main objective of this paper is to understand the current state-of-the-art and state-of-the-practice of metrics usage in LeAgile methods; additionally, to identify metrics that are suitable and have a high strength of evidence for their usage in industry; and likewise, to construct a LeAgile measurement model based on the context in which each metric is applied. Methods. This paper presents a two-step study. First, a Systematic Literature Review (SLR) is conducted to present the state-of-the-art on using metrics in LeAgile software development. 
Second, to allow a better understanding of what measures are currently being used in collaboration between industry and academia, we performed a case study at Telenor. Results. We found that metrics were mainly used to achieve an efficient flow of software development; to assess, track and improve product quality; for project planning and estimation; for project progress and tracking; to measure the teams; and for other purposes. Additionally, we present the metrics that have compelling evidence of use and are worth using in industry. Conclusions. We conclude that traditional metrics, or indeed any metrics, can be used in a LeAgile context, provided they do not harm the agility of the process. This study identified 4 new metrics, namely Business Value, Number of disturbance hours, Team Health check survey and Number of hours spent on IT divisions, that are not present in the state-of-the-art. The gaps identified in the LeAgile measurement model built in this study can provide a roadmap for further research on the measurement model, and any of the topics identified as a means of completing it can be a fruitful area for future research.
168

Data Driven Visualization Tool for Game Telemetry

Engelbrektsson, Martin, Lilja, Marcus January 2017 (has links)
This thesis describes and evaluates the implementation of a telemetry tool prototype for the game engine Stingray. Arbitrary data can be chosen from a database and visualized in an interactive 3D viewport in the program. The implemented visualization method is a scatter plot with optional color mapping. A MongoDB server communicates with the editor via a native DLL written in C/C++, which in turn can send data to the rendering engine via JavaScript-Lua communication. Several implemented and continuously improved pipelines are discussed and analyzed throughout the report. The tool is designed to be data driven, and the advantages and disadvantages of this choice are discussed. In the final chapter, future improvements and ideas for expanding the plug-in are discussed.
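The optional color mapping can be illustrated with a standalone Python sketch: normalize an arbitrary telemetry value into [0, 1], then map it onto a blue-to-red gradient. The actual tool does this inside the Stingray engine; the sample data below is invented.

```python
def normalize(value, lo, hi):
    """Clamp and scale a telemetry value into [0, 1]."""
    if hi == lo:
        return 0.0
    return min(1.0, max(0.0, (value - lo) / (hi - lo)))

def color_map(t):
    """Linear blue-to-red gradient as an (r, g, b) tuple in [0, 255]."""
    return (int(255 * t), 0, int(255 * (1 - t)))

# Example: color player-death positions by match time (invented samples).
samples = [(12.0, 3.0, 40.0), (30.5, 1.0, 95.0)]  # (x, y, time) triples
times = [t for _, _, t in samples]
lo, hi = min(times), max(times)
colors = [color_map(normalize(t, lo, hi)) for _, _, t in samples]
```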
169

Quality metrics in continuous delivery : A mixed approach

Jain, Aman, Aduri, Raghu ram January 2016 (has links)
Context. Continuous delivery deals with the concept of deploying user stories as soon as they are finished, rather than waiting for the sprint to end. This concept increases the chances of early improvement to the software and provides the customer with a clear view of the final product expected from the software organization, but little research has been done on the quality of the product developed and the ways to measure it. This research is conducted in the context of presenting a checklist of quality metrics that practitioners can use to ensure good quality product delivery. Objectives. In this study, the authors strive towards the accomplishment of the following objectives: the first objective is to identify the quality metrics being used in agile approaches and continuous delivery by organizations. The second objective is to evaluate the usefulness of the identified metrics, identify their limitations and identify new metrics. The final objective is to present and evaluate a solution, i.e., a checklist of metrics that practitioners can use to ensure the quality of a product developed using continuous delivery. Methods. To accomplish the objectives, the authors used a mixture of approaches. First, a literature review was performed to identify the quality metrics being used in continuous delivery. Based on the data obtained from the literature review, the authors performed an online survey using a questionnaire posted on an online questionnaire hosting website. The questionnaire was intended to establish the usefulness of the identified metrics and their limitations, and to identify new metrics from the responses. The authors then conducted interviews, comprising a few closed-ended and open-ended questions, which helped validate the use of the metrics checklist. Results. 
Based on the literature review performed at the start of the study, the authors obtained data on the background of continuous delivery, the research performed on continuous delivery by various practitioners, and a list of quality metrics used in continuous delivery. Later, the authors conducted the online survey, which resulted in a ranking of the usefulness of the quality metrics and the identification of new metrics used in continuous delivery. Based on the data obtained from the questionnaire, a checklist of quality metrics involved in continuous delivery was generated. Conclusions. Based on the interviews conducted to validate the checklist of metrics (generated as a result of the online questionnaire), the authors conclude that the checklist is fit for use in industry, with some necessary changes made to it based on project requirements. The checklist will act as a reminder to practitioners of the quality aspects that need to be measured during product development, and perhaps as a starting point when planning the metrics to be measured during a project.
170

Performance analysis of suboptimal soft decision DS/BPSK receivers in pulsed noise and CW jamming utilizing jammer state information

Juntti, J. (Juhani) 17 June 2004 (has links)
Abstract The problem of receiving direct sequence (DS) spread spectrum, binary phase shift keyed (BPSK) information under pulsed noise and continuous wave (CW) jamming is studied in additive white noise. An automatic gain control is not modelled. The general system theory of receiver analysis is first presented and previous literature is reviewed. The study treats the problem of decision making after matched filter or integrate-and-dump demodulation. The decision method has a great effect on system performance under pulsed jamming. The following receivers are compared: hard, soft, quantized soft, signal level based erasure, and chip combiner receivers. The analysis is done using a channel parameter D and a bit error upper bound. Simulations were done in the original papers using a convolutionally coded DS/BPSK system; they confirm that the analytical results are valid. Final conclusions are based on the analytical results. The analysis uses a Chernoff upper bound and a union bound. It is presented for pulsed noise and CW jamming, and the same kinds of methods can also be used to analyse other jamming signals. The receivers are compared under pulsed noise and CW jamming along with white Gaussian noise. The results show that noise jamming is more harmful than CW jamming and that a jammer should use a high pulse duty factor. If the jammer cannot optimise the pulse duty factor, a good robust choice is continuous time jamming. The best performance was achieved by the chip combiner receiver. Only slightly worse were the quantized soft and signal level based erasure receivers. The hard decision receiver was clearly worse. The soft decision receiver without jammer state information was shown to be the most vulnerable to pulsed jamming. The chip combiner receiver is 3 dB worse than an optimum receiver (the soft decision receiver with perfect channel state information). 
If a simple implementation is required, the hard decision receiver should be used. If a moderately complex implementation is allowed, the quantized soft decision receiver should be used. The signal level based erasure receiver does not give any remarkable improvement, so it is not worth using, because it is more complex to implement. If receiver complexity is not a limiting factor, the chip combiner receiver should be used. Uncoded DS/BPSK systems are vulnerable to jamming, and channel coding is an essential part of an antijam communication system. Detecting the jamming and erasing jammed symbols in the channel decoder can remove the effect of pulsed jamming. The realization of erasure receivers is rather easy using current integrated circuit technology.
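The hard-decision receiver's sensitivity to pulsed jamming can be illustrated with a small Monte-Carlo sketch: antipodal BPSK symbols in Gaussian noise, with extra noise added on a fraction rho of symbols to mimic jamming pulses. Parameter values are illustrative, not those of the thesis, and this toy omits spreading, coding, and jammer state information entirely.

```python
import random

def ber_hard_decision(n_bits, sigma, jam_sigma, rho, seed=1):
    """Empirical bit error rate of sign-based (hard) BPSK detection."""
    rng = random.Random(seed)
    errors = 0
    for _ in range(n_bits):
        bit = rng.choice((-1.0, 1.0))
        noise = rng.gauss(0.0, sigma)
        if rng.random() < rho:  # symbol hit by a jamming pulse
            noise += rng.gauss(0.0, jam_sigma)
        decision = 1.0 if bit + noise >= 0 else -1.0
        errors += decision != bit
    return errors / n_bits

clean = ber_hard_decision(20000, sigma=0.5, jam_sigma=2.0, rho=0.0)
jammed = ber_hard_decision(20000, sigma=0.5, jam_sigma=2.0, rho=0.3)
```

Even this crude model shows the raw error rate rising sharply under pulsed jamming, which is why the thesis treats channel coding and jammed-symbol erasure as essential.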
