21

Corporate annual reports (CARS) : accounting practices in transition

Cronje, C.J. (Christo Johannes) 26 November 2007
The main goal of this thesis was to obtain an understanding of the way in which accounting practices that are constantly in transition generate the information that is disclosed in corporate annual reports (CARS). The study shows that CARS may be seen as a product of two main interrelated information processing systems, the first being the mandatory financial information system (MFIS) and the second the discretionary information system (DIS). The MFIS uses accounting practices such as generally accepted accounting principles (GAAP), which include International Financial Reporting Standards (IFRS), International Accounting Standards (IASs), JSE regulations and the Companies' Act requirements, in producing the information disclosed in CARS. The needs of users to reduce the uncertainty and risks in their decision making have an influence on the constantly evolving accounting practices. Standard-setting bodies play a major role in the development and refinement of GAAP. On the other hand, the DIS, in order to provide a complete picture of business entities, uses discretionary accounting practices to produce the contextual information contained in CARS. These discretionary accounting practices are also currently in transition. They cater for the production of information on the business environment, and provide an operating and financial review, overview of strategy, forward-looking information, key performance indicators and information on corporate governance and transparency. Standard-setting bodies may be able to use the contextual information contained in CARS to develop and refine the GAAP used by the MFIS. / Thesis (DComm(Accounting Sciences))--University of Pretoria, 2008. / Financial Management / DCom / unrestricted
22

Assessment of spectrum-based fault localization for practical use

Higor Amario de Souza 17 April 2018
Debugging is one of the most time-consuming activities in software development. Several fault localization techniques have been proposed in recent years, aiming to reduce development costs. A promising approach, called Spectrum-based Fault Localization (SFL), comprises techniques that provide a list of suspicious program elements (e.g., statements, basic blocks, methods) that are more likely to be faulty. Developers then inspect this suspiciousness list to search for faults. However, these fault localization techniques are not yet used in practice. They rest on assumptions about the developer's behavior when inspecting such lists that may not hold in practice. A developer is supposed to inspect an SFL list from the most to the least suspicious program element until reaching the faulty one. This assumption has two implications: the techniques are assessed only by the position of a bug in a list, and a bug is deemed found when the faulty element is reached. To be useful in practice, SFL techniques should pinpoint the faulty program elements among the first picks. Most techniques use ranking metrics to assign suspiciousness values to the program elements executed by the tests. These ranking metrics have presented similarly modest results, which indicates the need for different strategies to improve the effectiveness of SFL. Moreover, most techniques use only control-flow spectra, due to the high execution costs associated with other spectra, such as data-flow. Also, little research has investigated the use of SFL techniques by practitioners. Understanding how developers use SFL may help to clarify the theoretical assumptions about their behavior, which in turn can inform the design of techniques better suited to practical use. User studies are therefore a valuable tool for the development of the area. The goal of this thesis research was to propose strategies to improve spectrum-based fault localization, focusing on its practical use. This thesis presents the following contributions. First, we investigate strategies to provide contextual information for SFL. These strategies helped to reduce the amount of code to be inspected until reaching the faults. Second, we carried out a user study to understand how developers use SFL in practice. The results show that developers can benefit from SFL to locate bugs. Third, we explore the use of data-flow spectra for SFL. Data-flow spectra single out faults significantly better than control-flow spectra, improving fault localization effectiveness.
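To make the ranking step concrete, here is a minimal sketch of one widely used SFL ranking metric, the Ochiai coefficient, computed from a coverage spectrum. The metric choice, coverage matrix, and names are illustrative assumptions, not taken from the thesis:

```python
import math

def ochiai_ranking(coverage, outcomes):
    """Rank program elements by Ochiai suspiciousness.

    coverage: element id -> set of test ids that execute it
    outcomes: test id -> True if the test failed
    """
    total_failed = sum(1 for failed in outcomes.values() if failed)
    scores = {}
    for element, tests in coverage.items():
        failed = sum(1 for t in tests if outcomes[t])  # failing tests covering element
        passed = len(tests) - failed                   # passing tests covering element
        denom = math.sqrt(total_failed * (failed + passed))
        scores[element] = failed / denom if denom else 0.0
    # Developers would inspect elements from most to least suspicious.
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Tiny invented spectrum: statement s3 is covered by both failing tests.
coverage = {"s1": {"t1", "t2"}, "s2": {"t2", "t3"}, "s3": {"t1", "t3"}}
outcomes = {"t1": True, "t2": False, "t3": True}
print(ochiai_ranking(coverage, outcomes))  # s3 ranks first with score 1.0
```

In this toy spectrum, s3 is executed by both failing tests and no passing test, so it lands at the top of the list, the "first picks" behavior the thesis argues SFL techniques need to be useful in practice.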
23

Visibility of Performance

Pidun, Tim 03 June 2015
The supply of adequate information is one of the main functions of Performance Measurement Systems (PMS), but it is also still one of their main shortcomings and a reason for their failure. Not only the collection of indicators is crucial, but also the stakeholders' understanding of their meaning, purpose, and contextual embedding. Today, companies must choose a PMS without any way to express the goodness of a solution, that is, its ability to deliver appropriate information and to address these demands. The goal of this investigation is to explore the mechanisms that drive information and knowledge supply in PMS in order to model a way to express this goodness. Using a Grounded Theory approach, a theory of the visibility of performance is developed, featuring a catalog of determinants for the goodness of a PMS. Companies can conveniently use this catalog to assess their PMS and to improve the visibility of their performance.
24

The role of social network sites in creating information value and social capital

Koroleva, Ksenia 02 November 2012
As users gain experience with social network sites (SNS), they: i) exchange information with each other; ii) connect with each other and, as a result, form certain network structures; and iii) obtain social capital benefits from maintaining relationships with others. The structure of this dissertation reflects these three aspects of SNS. In the first part, we explore the impact of information characteristics – depth, breadth, context, social information – on the value of the information users derive from their networks. In the second part, we explore how users construct their networks and how properties of network structure – tie strength and network overlap – relate to information value. In the third part, we explore the impact of network structure and shared information on the process of social capital formation. We additionally control for user experience, as we believe this factor might affect the perception of value. Owing to the scarcity of prior research findings, we use explorative methodologies such as Grounded Theory to study these new phenomena and generate conceptual models, which are then verified empirically. Although most of the research presented in this dissertation is behavioral, it also contains design science elements: for example, we design and implement Facebook applications that allow us to collect user data in real time. The main results of the dissertation can be summarized in three major contributions. First and foremost, the underlying tie strength emerges as the most important factor that drives user behavior on SNS. Second, although people prefer information from their stronger ties, researchers should differentiate between forms of network structure in their impact on information value; network overlap, for example, has a negative relationship with information value. Third, experience factors mediate many user behaviors on SNS.
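As a rough illustration of one of the network properties studied, the sketch below computes the neighborhood overlap of a tie, a common operationalization of network overlap; the ego networks and names are invented for the example:

```python
def network_overlap(friends_of, a, b):
    """Neighborhood overlap of the tie (a, b): contacts shared by both
    endpoints divided by all distinct contacts of either, excluding a and b."""
    shared = (friends_of[a] & friends_of[b]) - {a, b}
    union = (friends_of[a] | friends_of[b]) - {a, b}
    return len(shared) / len(union) if union else 0.0

# Invented ego networks: 'anna' and 'ben' share two of three distinct contacts.
friends_of = {
    "anna": {"ben", "carl", "dora"},
    "ben": {"anna", "carl", "dora", "emil"},
}
print(network_overlap(friends_of, "anna", "ben"))  # ~0.67 -> 2 shared / 3 distinct
```

A high value means the two users' circles largely coincide, which, per the dissertation's findings, tends to lower the value of the information exchanged over that tie.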
25

Predicting co-changes of software artifacts based on contextual information

Wiese, Igor Scaliante 18 March 2016
Co-change prediction aims to make developers aware of which artifacts may change together with the artifact they are working on. In the past, researchers relied on structural analysis to build prediction models. More recently, hybrid approaches relying on historical information and textual analysis have been proposed. Despite the advances in the area, software developers still do not use these approaches widely, presumably because of the number of false recommendations. The hypothesis of this thesis is that contextual information about software changes, collected from issues, developers' communication, and commit metadata, describes the circumstances and conditions under which a co-change occurs and is useful for predicting co-changes. The aim of this thesis is to use contextual information to build co-change prediction models that improve overall accuracy, especially by decreasing the number of false recommendations. We built prediction models specific to each pair of files, using contextual information and the Random Forest machine learning algorithm. The approach was evaluated on 129 versions of 10 open source projects from the Apache Software Foundation and compared with a baseline model based on association rules, which are often used in the literature. We evaluated the performance of the prediction models, investigating the influence of data aggregation when building training and test sets, as well as identifying the most relevant contextual information. The results indicate that models based on contextual information correctly predict 88% of co-change instances, against 19% for the association rules model, making them three times more accurate. Models created with contextual information collected in each software version were more accurate than models built from an arbitrary amount of contextual information collected from more than one version. The most important pieces of contextual information for building the prediction models were: number of lines of code added or modified, number of lines of code removed, code churn (the sum of lines added, modified, and removed during a commit), number of words in the description and discussion of a task, number of comments, and the role of developers in the discussion, measured by the betweenness centrality obtained from the communication social network. We asked project developers about the relevance of the results obtained by the prediction models based on contextual information. According to them, the results can help developers who are new to the project, since they have no knowledge of the architecture and are usually not familiar with the history of the artifacts. Our results thus indicate that prediction models based on contextual information are useful for supporting developers during maintenance and evolution activities.
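A minimal sketch of a per-file-pair model of the kind described, using scikit-learn's RandomForestClassifier and networkx's betweenness centrality for the communication-network feature. The feature set mirrors the contextual information listed above, but the field names and data are invented for illustration:

```python
import networkx as nx
from sklearn.ensemble import RandomForestClassifier

# Invented communication network built from task discussions.
talk = nx.Graph()
talk.add_edges_from([("alice", "bob"), ("bob", "carol"), ("alice", "dave")])
betweenness = nx.betweenness_centrality(talk)

def features(commit):
    # One row per commit touching file A; the label records whether
    # file B changed in the same commit (a co-change).
    return [
        commit["lines_added_or_modified"],
        commit["lines_removed"],
        commit["code_churn"],
        commit["words_in_task_description"],
        commit["num_comments"],
        betweenness.get(commit["author"], 0.0),
    ]

history = [
    {"lines_added_or_modified": 40, "lines_removed": 5, "code_churn": 45,
     "words_in_task_description": 120, "num_comments": 7, "author": "bob",
     "co_changed": 1},
    {"lines_added_or_modified": 3, "lines_removed": 1, "code_churn": 4,
     "words_in_task_description": 15, "num_comments": 0, "author": "dave",
     "co_changed": 0},
]

X = [features(c) for c in history]
y = [c["co_changed"] for c in history]
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(model.predict([features(history[0])]))  # [1] -> file B likely co-changes
```

Training one such model per file pair, as the thesis does, keeps each classifier focused on the circumstances under which that specific pair changes together.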
26

Informative segment filtering in video sequences

Guilmart, Christophe 20 December 2011
The objective of this thesis is to extract the informative temporal segments from video sequences, particularly aerial video. Manual interpretation of such videos for information gathering faces an ever-growing volume of available data. We therefore consider algorithmic assistance based on different indexing modalities, in order to locate "segments of interest" and avoid viewing the entire video. Two approaches were chosen, each developed in one part of this thesis. Part 1 describes how viewing conditions can be used as an indexing modality. Assessing image quality makes it possible to filter out temporal segments whose quality is too low to be exploited. Classifying the apparent image motion, which is directly linked to camera motion, provides an indexation of the video sequence; in particular, it highlights potential segments of interest or, conversely, difficult segments in which the motion is very fast or oscillating. Part 2 focuses on the dynamic content of the video sequence, especially the presence of moving objects. We first present an approach that is local in time: it refines the results of an initial classification obtained by supervised learning, exploiting contextual information, first spatial and then semantic. We then investigate several approaches to moving object detection that are global in time. Such approaches enforce the temporal consistency of the detected objects and reduce false alarms.
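The quality-based filtering of Part 1 can be sketched as thresholding a per-frame quality score and keeping the runs of good frames as exploitable segments. The score values, threshold, and minimum length below are invented placeholders for whatever quality measure the thesis actually uses:

```python
def exploitable_segments(quality_scores, threshold=0.5, min_length=10):
    """Group consecutive frames whose quality clears the threshold into
    (start, end) segments; runs shorter than min_length are discarded."""
    segments, start = [], None
    for i, q in enumerate(quality_scores):
        if q >= threshold and start is None:
            start = i                          # a good run begins
        elif q < threshold and start is not None:
            if i - start >= min_length:
                segments.append((start, i))
            start = None                       # the run ends
    if start is not None and len(quality_scores) - start >= min_length:
        segments.append((start, len(quality_scores)))
    return segments

# Invented scores: frames 0-2 blurry, 3-19 sharp, 20-24 blurry again.
scores = [0.2] * 3 + [0.9] * 17 + [0.1] * 5
print(exploitable_segments(scores))  # [(3, 20)] -> only the sharp run survives
```

The same run-grouping idea carries over to the motion-based indexing: segments whose camera motion is classified as very fast or oscillating would be flagged as difficult rather than discarded.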
29

Modular Architecture of Distributed Applications

Musil, Jiří January 2007
Traditional software system architectures are too heavyweight for the heterogeneous environment of today's computer networks. One principle that attempts to solve this problem is service-oriented architecture (SOA); in practice, it is implemented by web services (WS) built on protocols such as SOAP or XML-RPC. This diploma thesis focuses on the problem of providing contextual information to mobile devices and its solution based on SOA principles. The thesis presents the design and implementation of a web service that provides contextual information to mobile devices, and a prototype of a modular inverse SOAP proxy server that allows the service to be effectively monitored and managed.
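As a rough illustration of the kind of service described, here is a minimal context-providing endpoint. The thesis's implementation is SOAP-based; XML-RPC, which the abstract also names, is used here only because Python ships it in the standard library, and the method name and data are invented:

```python
from xmlrpc.server import SimpleXMLRPCServer

# Invented store of contextual information keyed by location.
CONTEXT = {
    "brno": {"weather": "cloudy", "nearby": ["cafe", "tram stop"]},
}

def get_context(location):
    """Return contextual information for a mobile client's location."""
    return CONTEXT.get(location.lower(), {})

server = SimpleXMLRPCServer(("localhost", 8000), allow_none=True)
server.register_function(get_context, "get_context")
print("Serving contextual information on port 8000 ...")
server.serve_forever()
```

A mobile client would then call `xmlrpc.client.ServerProxy("http://localhost:8000").get_context("brno")`; in the thesis's architecture, such calls would additionally pass through the inverse SOAP proxy for monitoring and management.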
30

Object Detection via Contextual Information

Stålebrink, Lovisa January 2022
Using computer vision to automatically process and understand images is becoming increasingly popular. One frequently used technique in this area is object detection, where the goal is to both localize and classify objects in images. Today's detection models are accurate, but there is still room for improvement. Most models process objects independently and do not take any contextual information into account in the classification step. This thesis therefore investigates whether a performance improvement can be achieved by classifying all objects jointly with the use of contextual information. The transformer is an architecture with the ability to learn relationships over this kind of information. To investigate what performance can be achieved, a new architecture is constructed in which the classification step is replaced by a transformer block. The model is trained and evaluated on document images and shows promising results, with a mAP score of 87.29. This value is compared to the mAP of 88.19 achieved by Mask R-CNN, the object detector the new model is built upon. Although the proposed model did not improve performance, it comes with some benefits worth exploring further. By using contextual information, the proposed model can eliminate the need for Non-Maximum Suppression, which removes one hand-crafted process. Another benefit is that the model tends to learn relatively quickly: a single pass over the dataset seems sufficient. The model, however, comes with some drawbacks, including a longer inference time due to the increase in model parameters, and its predictions are also less certain than those of Mask R-CNN. With some further investigation and optimization, these drawbacks could be reduced and the performance of the model improved.
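A minimal sketch of the core idea, replacing the per-object classification step with a transformer block that classifies all detections in an image jointly. The shapes, hyperparameters, and class count are invented; this is not the thesis's exact architecture:

```python
import torch
import torch.nn as nn

class JointClassifierHead(nn.Module):
    """Classify all detections in an image jointly: self-attention lets each
    object's features attend to every other object's features (the context)."""
    def __init__(self, feat_dim=256, num_classes=5, nhead=8, num_layers=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=nhead,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.classify = nn.Linear(feat_dim, num_classes)

    def forward(self, roi_features):
        # roi_features: (batch, num_detections, feat_dim), e.g. pooled region
        # features from a detector backbone such as Mask R-CNN's.
        context_aware = self.encoder(roi_features)
        return self.classify(context_aware)  # (batch, num_detections, num_classes)

head = JointClassifierHead()
rois = torch.randn(1, 12, 256)   # 12 detected regions in one image
logits = head(rois)
print(logits.shape)              # torch.Size([1, 12, 5])
```

Because every detection sees every other one, duplicate boxes over the same object can in principle be suppressed by the classifier itself, which is what lets this design drop Non-Maximum Suppression.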
