211

Agrobench Sachsen

Schirrmacher, Mike, Penkalla, Uwe 04 May 2010 (has links) (PDF)
»Agrobench Sachsen« stands for an individual-farm analysis of the strengths and weaknesses of agricultural enterprises. An online database was developed for Saxon farms that allows them to use their annual financial statements to compare themselves with other enterprises and with industry reference values in terms of productivity, liquidity, profitability and stability. Users can also measure themselves against the best values in the benchmark. The new online solution considerably improves the explanatory power and clarity of presentation of the previous methods of farm assessment.
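The comparison described above, measuring a farm's annual-statement figures against sector reference values and the best values in the benchmark, can be illustrated with a minimal sketch. All metric names and figures below are hypothetical and are not taken from the Agrobench Sachsen database.

```python
# Minimal sketch of a single-farm benchmark comparison (hypothetical figures,
# not data from Agrobench Sachsen).
farm = {"productivity": 48.0, "liquidity": 1.1, "profitability": 0.06, "stability": 0.35}
sector_reference = {"productivity": 52.0, "liquidity": 1.2, "profitability": 0.08, "stability": 0.40}
sector_best = {"productivity": 61.0, "liquidity": 1.6, "profitability": 0.14, "stability": 0.55}

for metric, value in farm.items():
    vs_reference = value / sector_reference[metric] * 100  # % of sector reference value
    vs_best = value / sector_best[metric] * 100            # % of benchmark best value
    print(f"{metric:13s}: {vs_reference:5.1f}% of reference, {vs_best:5.1f}% of best")
```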
212

Kalkylavvikelser : Aktiv påverkan i det dagliga arbetet i tillverkande företag / Calculated deviations : Active impact in the daily work in manufacturing companies

Filipsson, Amanda, Andersson, Elin January 2015 (has links)
Today's society is characterized by constant change in the business environment. Internationalization, globalization and technological development lead to increased global competition, so companies must improve their processes to remain competitive and profitable. This constantly changing environment requires well-implemented management control if companies are to keep up with developments. Manufacturing companies represent a large part of Swedish industry and must therefore adapt at the pace of globalization. Many researchers have identified a knowledge gap between management control in theory and in practice: much research and development of management control methods takes place without examining how the methods are perceived in practice.

The purpose of this study is to identify the causes that can explain why calculation deviations arise in a manufacturing company in the Swedish automotive industry, and how these deviations can be influenced and reduced through active intervention in the daily work. With internal benchmarking and Kaizen as tools, we aim to increase knowledge about these deviations and how they can be influenced, so that they can be reduced and the pre-calculations made more precise. The study was carried out with a qualitative approach, using interviews, meetings and observations at two departments of a larger manufacturing company in order to create a deeper understanding.

The results show that there is considerable scope to influence the calculation deviations that arise. The opportunity for active intervention differs depending on the respondent's position in the company. The study shows that most respondents consider lack of time to be the main reason why they do not make more use of improvement tools such as Kaizen and internal benchmarking. Furthermore, we found that employees are motivated by more than monetary compensation alone, such as being listened to and receiving verbal encouragement. The study concludes that, in theory, tools for increasing efficiency are often presented as unproblematic and easy to implement, whereas in practice they are more complicated.
213

Efficient Extraction and Query Benchmarking of Wikipedia Data

Morsey, Mohamed 06 January 2014 (has links) (PDF)
Knowledge bases are playing an increasingly important role for integrating information between systems and over the Web. Today, most knowledge bases cover only specific domains, they are created by relatively small groups of knowledge engineers, and it is very cost intensive to keep them up-to-date as domains change. In parallel, Wikipedia has grown into one of the central knowledge sources of mankind and is maintained by thousands of contributors. The DBpedia (http://dbpedia.org) project makes use of this large, collaboratively edited knowledge source by extracting structured content from it, interlinking it with other knowledge bases, and making the result publicly available. DBpedia has had a great effect on the Web of Data and has become a crystallization point for it. Furthermore, many companies and researchers use DBpedia and its public services to improve their applications and research approaches. However, the DBpedia release process is heavy-weight and the releases are sometimes based on data that is several months old. Hence, a strategy to keep DBpedia permanently in synchronization with Wikipedia is highly desirable. In this thesis we propose the DBpedia Live framework, which reads a continuous stream of updated Wikipedia articles and processes it on-the-fly to obtain RDF data, updating the DBpedia knowledge base with the newly extracted data. DBpedia Live also publishes the newly added and deleted facts in files, in order to enable synchronization between our DBpedia endpoint and other DBpedia mirrors. Moreover, the new DBpedia Live framework incorporates several significant features, e.g. abstract extraction, ontology changes, and changeset publication.

Knowledge bases, including DBpedia, are stored in triplestores in order to facilitate accessing and querying their data. Triplestores also constitute the backbone of increasingly many Data Web applications, so their performance is mission critical for individual projects as well as for data integration on the Data Web in general. Consequently, it is of central importance, when implementing any of these applications, to have a clear picture of the weaknesses and strengths of current triplestore implementations. We introduce a generic SPARQL benchmark creation procedure, which we apply to the DBpedia knowledge base. Previous approaches often compared relational databases and triplestores and thus settled on measuring performance against a relational database that had been converted to RDF, using SQL-like queries. In contrast, our benchmark is based on queries that were actually issued by humans and applications against existing RDF data that does not resemble a relational schema. Our generic procedure for benchmark creation is based on query-log mining, clustering and SPARQL feature analysis. We argue that a pure SPARQL benchmark is more useful for comparing existing triplestores, and we provide results for the popular triplestore implementations Virtuoso, Sesame, Apache Jena-TDB, and BigOWLIM. The subsequent comparison of our results with other benchmark results indicates that the performance of triplestores is far less homogeneous than suggested by previous benchmarks.

Finally, one of the crucial tasks when creating and maintaining knowledge bases is validating their facts and maintaining the quality of their data. This task includes several subtasks, and in this thesis we address two of the major ones: fact validation and provenance, and data quality. Fact validation and provenance aim at providing sources for facts in order to ensure the correctness and traceability of the provided knowledge. This subtask is often addressed by human curators in a three-step process: issuing appropriate keyword queries for the statement to check using standard search engines, retrieving potentially relevant documents, and screening those documents for relevant content. The drawbacks of this process are manifold; most importantly, it is very time-consuming, as the experts have to carry out several search processes and must often read several documents. We present DeFacto (Deep Fact Validation), an algorithm for validating facts by finding trustworthy sources for them on the Web. DeFacto aims to provide an effective way of validating facts by supplying the user with relevant excerpts of web pages as well as useful additional information, including a score for the confidence DeFacto has in the correctness of the input fact. The subtask of data quality maintenance aims at evaluating and continuously improving the quality of the data in knowledge bases. We present a methodology for assessing the quality of knowledge bases' data, which comprises a manual and a semi-automatic process. The first phase includes the detection of common quality problems and their representation in a quality problem taxonomy. In the manual process, the second phase comprises the evaluation of a large number of individual resources, according to the quality problem taxonomy, via crowdsourcing. This process is accompanied by a tool wherein a user assesses an individual resource and evaluates each fact for correctness. The semi-automatic process involves the generation and verification of schema axioms. We report the results obtained by applying this methodology to DBpedia.
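As an illustration of the kind of SPARQL workload such a benchmark replays against a triplestore, the following sketch sends one query to the public DBpedia endpoint using the SPARQLWrapper library; the endpoint URL is real, but the query itself is a made-up example, not one of the benchmark's mined query-log queries.

```python
# Sketch: issue a SPARQL query against the DBpedia endpoint (requires the
# SPARQLWrapper package). The query is an illustrative example, not a query
# taken from the benchmark described in the thesis.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setQuery("""
    PREFIX dbo: <http://dbpedia.org/ontology/>
    SELECT ?city ?population WHERE {
        ?city a dbo:City ;
              dbo:country <http://dbpedia.org/resource/Germany> ;
              dbo:populationTotal ?population .
    }
    ORDER BY DESC(?population)
    LIMIT 5
""")
sparql.setReturnFormat(JSON)

for row in sparql.query().convert()["results"]["bindings"]:
    print(row["city"]["value"], row["population"]["value"])
```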
214

An automated approach to create, manage and analyze large-scale experiments for elastic n-tier application in clouds

Jayasinghe, Indika D. 20 September 2013 (has links)
Cloud computing has revolutionized the computing landscape by providing on-demand, pay-as-you-go access to elastically scalable resources. Many applications are now being migrated from on-premises data centers to public clouds; yet, the transition to the cloud is not always straightforward and smooth. An application that performed well in an on-premises data center may not perform identically in public computing clouds, because many variables like virtualization can impact the application's performance. By collecting significant performance data through experimental study, the cloud's complexity, particularly as it relates to performance, can be revealed. However, conducting large-scale system experiments is particularly challenging because of the practical difficulties that arise during experimental deployment, configuration, execution and data processing. In spite of these complexities, we argue that a promising approach for addressing these challenges is to leverage automation to facilitate the exhaustive measurement of large-scale experiments. Automation provides numerous benefits: it removes the error-prone and cumbersome involvement of human testers, reduces the burden of configuring and running large-scale experiments for distributed applications, and accelerates the process of reliable application testing. In our approach, we have automated three key activities associated with the experiment measurement process: create, manage and analyze. In create, we prepare the platform and deploy and configure applications. In manage, we initialize the application components (in a reproducible and verifiable order), execute workloads, collect resource monitoring and other performance data, and parse and upload the results to the data warehouse. In analyze, we process the collected data using various statistical and visualization techniques to understand and explain performance phenomena. In our approach, a user provides only the experiment configuration file and, at the end, merely receives the results; the framework does everything else. We enable the automation through code generation. From an architectural viewpoint, our code generator adopts the compiler approach of multiple, serial transformative stages; the hallmarks of this approach are that stages typically operate on an XML document that is the intermediate representation, and XSLT performs the code generation. Our automated approach to large-scale experiments has enabled cloud experiments to scale well beyond the limits of manual experimentation, and it has enabled us to identify non-trivial performance phenomena that could not have been identified otherwise.
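A minimal sketch of the XML-plus-XSLT code-generation idea described above: an XML intermediate representation of an experiment is transformed by an XSLT stylesheet into provisioning commands. The element names, the generated commands and the use of lxml are illustrative assumptions, not the framework's actual formats.

```python
# Sketch of code generation from an XML intermediate representation via XSLT
# (requires lxml). The IR schema and the generated commands are hypothetical.
from lxml import etree

ir = etree.XML("""<experiment name="rubbos-scaleout">
  <node role="web" count="2"/>
  <node role="db" count="1"/>
</experiment>""")

stylesheet = etree.XML("""<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="text"/>
  <xsl:template match="/experiment">
    <xsl:for-each select="node">
      <xsl:text>provision --role </xsl:text><xsl:value-of select="@role"/>
      <xsl:text> --count </xsl:text><xsl:value-of select="@count"/>
      <xsl:text>&#10;</xsl:text>
    </xsl:for-each>
  </xsl:template>
</xsl:stylesheet>""")

generate = etree.XSLT(stylesheet)   # compile the stylesheet once
print(str(generate(ir)))            # emits one provisioning command per node element
```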
215

An Internal Benchmarking and Metrics (BM&M) Model for Industrial Construction Enterprise to Understand the Impact of Practices Implementation Level on Construction Productivity

Zhang, Di January 2014 (has links)
Construction productivity improvement is a key concern for construction companies and the industry. Productivity in construction is a complex issue because: (1) it is influenced by multiple factors interactively; and (2) it is measured in different forms and at different levels of detail for different purposes. The objective of this research is to develop an internal Benchmarking and Metrics (BM&M) model for industrial construction enterprises to help them understand and implement mechanisms for continuously improving construction productivity. Processes are developed in the model for: (1) measuring and reporting craft labour productivity performance in a consistent form for the purposes of internal benchmarking and comparison with a selected third-party benchmark; (2) examining productivity-influencing factors in two categories, construction environment factors and construction practices implementation; (3) establishing a productivity performance evaluation model to understand the mechanisms by which the environment factors and construction practices impact construction productivity; and (4) conducting strategic gap analysis of construction practices implementation within a company aimed at achieving "best in class" and continuous improvement. System functions in the model are validated through functional demonstration by applying statistical analysis to data collected, using the designed benchmarking process and metrics, from an industrial construction company. It is concluded that the model developed can be effectively used to understand the impact of practices implementation levels on construction productivity.
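As a rough illustration of the kind of statistical analysis such a model relies on, the sketch below fits a simple linear relation between a practice-implementation score and measured craft labour productivity (work-hours per installed unit). All numbers are fabricated for the example; the thesis's actual metrics and evaluation model are richer than this.

```python
# Sketch: relate practice implementation level to craft labour productivity.
# Productivity here is expressed as work-hours per installed unit (lower is
# better); all values are made up for illustration.
import numpy as np

practice_score = np.array([2.1, 2.8, 3.0, 3.6, 4.2, 4.5])   # implementation level (1-5 scale)
hours_per_unit = np.array([9.8, 9.1, 8.7, 7.9, 7.2, 6.8])    # measured productivity

slope, intercept = np.polyfit(practice_score, hours_per_unit, 1)
r = np.corrcoef(practice_score, hours_per_unit)[0, 1]

print(f"hours/unit ~= {slope:.2f} * score + {intercept:.2f} (r = {r:.2f})")
# A negative slope would suggest that higher practice implementation levels
# coincide with fewer work-hours per installed unit in this toy data set.
```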
216

Evaluation of the Configurable Architecture REPLICA with Emulated Shared Memory / Utvärdering av den konfigurerbara arkitekturen REPLICA med emulerat delat minne

Alnervik, Erik January 2014 (has links)
REPLICA is a family of novel scalable chip multiprocessors with a configurable emulated shared memory architecture, whose computation model is based on the PRAM (Parallel Random Access Machine) model. The purpose of this thesis is to evaluate how REPLICA is positioned among existing architectures, in both performance and programming effort, by benchmarking different types of computational problems on REPLICA, on similar parallel architectures (SB-PRAM and XMT) and on more diverse ones (Xeon X5660 and Tesla M2050). It also examines whether REPLICA is particularly suited to any special kinds of computational problems. By using some of the well-known Berkeley dwarfs, and input from unbiased sources such as The University of Florida Sparse Matrix Collection and the Rodinia benchmark suite, we ensure that the benchmarks measure relevant computational problems. We show that today's parallel architectures have performance issues for applications with irregular memory access patterns, which the REPLICA architecture can solve. For example, REPLICA only needs to be clocked at a few MHz to match both the Xeon X5660 and the Tesla M2050 on the irregular memory access benchmark breadth-first search. By comparing the efficiency of REPLICA to a CPU (Xeon X5660), we show that it is easier to program REPLICA efficiently than today's multiprocessors.
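Breadth-first search is the benchmark the abstract singles out for irregular memory access: which vertices are touched next depends entirely on the data, so accesses cannot be predicted or coalesced. A minimal level-synchronous BFS sketch (the graph is a made-up example) shows where that irregularity comes from.

```python
# Sketch: level-synchronous breadth-first search. The neighbour lookups
# (graph[v]) jump around memory in a data-dependent way, which is the
# irregular access pattern discussed in the abstract. The graph is a toy example.
from collections import deque

graph = {0: [1, 2], 1: [0, 3], 2: [0, 3, 4], 3: [1, 2, 5], 4: [2, 5], 5: [3, 4]}

def bfs_levels(source):
    level = {source: 0}
    frontier = deque([source])
    while frontier:
        v = frontier.popleft()
        for w in graph[v]:          # data-dependent, irregular accesses
            if w not in level:
                level[w] = level[v] + 1
                frontier.append(w)
    return level

print(bfs_levels(0))   # {0: 0, 1: 1, 2: 1, 3: 2, 4: 2, 5: 3}
```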
217

Benchmarking als Controlling-Instrument für die Kontraktlogistik : Prozessbenchmarking für Logistikdienstleister am Beispiel von Lagerdienstleistungen /

Krupp, Thomas. January 2006 (has links)
Doctoral dissertation--Universität Nürnberg, 2005, under the title: Krupp, Thomas: Benchmarking als Controlling-Instrument für das Kontraktlogistikgeschäft der Logistikdienstleister - Einsatzmöglichkeiten und Potentiale des Prozessbenchmarking am Beispiel von Lagerdienstleistungen--Erlangen.
218

Erfolgsfaktoren der Logistik-Prozesskette als Analyse- und Gestaltungsinstrument für die Auftragsabwicklung von Industrieunternehmen : Prozess-Benchmarking in Deutschland, Japan und Südkorea am Beispiel der Schiffbauindustrie /

Schüssler, Uwe. January 1999 (has links)
Doctoral dissertation--Universität St. Gallen, 1998. / Bibliography pp. 365-388.
219

Purchasing performance measures and benchmarking : a case study of a lift company /

Lo, Tsuen-ying. January 1997 (has links)
Thesis (M.B.A.)--University of Hong Kong, 1997. / Includes bibliographical references.
220

Automatic assessment of OLAP exploration quality / Evaluation automatique de la qualité des explorations OLAP

Djedaini, Mahfoud 06 December 2017 (has links)
Before the arrival of Big Data, the amount of data stored in databases was relatively small and therefore rather simple to analyze. In that context, the main challenge in the field was to optimize data storage and, above all, the response time of Database Management Systems (DBMS). Numerous benchmarks, notably those of the TPC consortium, were put in place to allow different existing systems to be evaluated under similar conditions. The arrival of Big Data, however, completely changed the situation, with more and more data generated every day. Alongside the increase in available memory, we have witnessed the emergence of new storage methods based on distributed systems, such as the HDFS file system used notably in Hadoop to cover the storage and processing needs of Big Data. The growth in data volume therefore makes analysis much harder. In this context, the issue is not so much measuring data retrieval speed as producing coherent sequences of queries that quickly identify the areas of interest in the data, so that those areas can be analyzed in more depth and information supporting informed decision-making can be extracted. / In a Big Data context, traditional data analysis is becoming more and more tedious. Many approaches have been designed and developed to support analysts in their exploration tasks. However, there is no automatic, unified method for evaluating the quality of the support these different approaches provide. Current benchmarks focus mainly on evaluating systems in terms of temporal, energy or financial performance. In this thesis, we propose a model, based on supervised machine learning methods, to evaluate the quality of an OLAP exploration. We use this model to build an evaluation benchmark for exploration support systems, the general principle of which is to let these systems generate explorations and then to evaluate them through the explorations they produce.
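A heavily simplified sketch of the idea of scoring explorations with a supervised model: each OLAP exploration is summarized by a hand-crafted feature vector, and a classifier trained on labelled explorations predicts whether a new one looks focused or aimless. The features, labels and use of scikit-learn are illustrative assumptions, not the model described in the thesis.

```python
# Sketch: score OLAP explorations with a supervised classifier (requires
# scikit-learn). Features, labels and data are fabricated for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# One row per exploration: [number of queries, mean drill-down depth,
# fraction of repeated cuboids, fraction of queries returning empty results]
X_train = np.array([
    [12, 3.2, 0.10, 0.05],
    [40, 1.1, 0.60, 0.30],
    [ 8, 2.8, 0.15, 0.00],
    [55, 0.9, 0.70, 0.40],
])
y_train = np.array([1, 0, 1, 0])   # 1 = focused exploration, 0 = aimless

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)

new_exploration = np.array([[20, 2.5, 0.20, 0.10]])
print("quality score:", clf.predict_proba(new_exploration)[0, 1])
```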
