771

Systém pro kontinuální integraci projektu k-Wave / Continuous Integration System for the k-Wave Project

Nečas, Radek January 2016 (has links)
The main goal of this thesis is to describe the implementation of continuous integration for the k-Wave project. The thesis focuses primarily on the version written in C/C++ with OpenMP, which typically runs on supercomputers. Accordingly, many popular workflows and approaches had to be adapted, and a few new ones created. The outcome of the thesis is a complete solution in real, practical use. The author covers the design, the selection of tools, and the administration and configuration of the runtime environment for each of the services used. A basic software framework is implemented to enable running tests on the supercomputers. Furthermore, selected types of regression and unit tests are implemented. The realisation is based on GitLab and Jenkins services running in separate Docker containers.
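As a hedged illustration of the kind of numerical regression test such a CI pipeline can run (the binary name, file arguments, tolerances and toy data below are hypothetical placeholders, not taken from the thesis), a minimal sketch in Python:

```python
# Sketch of a numerical regression test: compare a fresh simulation output
# against a stored reference within a tolerance. The binary name, file names,
# and output format are hypothetical placeholders.
import subprocess
import numpy as np

def run_case(binary, case_in, case_out):
    """Run the simulation binary (placeholder call) and fail loudly on a non-zero exit."""
    subprocess.run([binary, "-i", case_in, "-o", case_out], check=True)

def matches_reference(output, reference, rtol=1e-6, atol=0.0):
    """Element-wise comparison with a relative tolerance."""
    return np.allclose(output, reference, rtol=rtol, atol=atol)

# Self-contained demonstration with toy arrays standing in for simulated fields:
reference = np.linspace(1.0, 2.0, 5)
fresh = reference + 1e-8            # tiny numerical drift is accepted
broken = reference * 1.01           # a 1 % deviation is flagged
print(matches_reference(fresh, reference))    # True
print(matches_reference(broken, reference))   # False
```

In a real setup, a CI job would first invoke the simulation binary via something like `run_case` and then compare its stored output against a committed reference with `matches_reference`.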
772

Vernetzt planen und produzieren VPP 2006 : Tagungsband Chemnitz 14. und 15. September 2006

Collected volume by multiple authors 05 July 2007 (has links)
Against the background of market conditions that are changing ever faster and more profoundly, networks are regarded as the enterprise form of the 21st century. They allow small and medium-sized enterprises in particular to maintain and increase their competitiveness through targeted cooperation and the pooling of their competencies. To do so, companies need suitable methods and instruments. These, along with theories and models, are at the centre of the scientific work of the Collaborative Research Centre (SFB) 457 "Hierarchielose regionale Produktionsnetze" (hierarchy-free regional production networks) at Chemnitz University of Technology. On 14 and 15 September 2006 the conference "Vernetzt planen und produzieren – VPP 2006" takes place for the fifth time. As in previous years, researchers and practitioners will present and discuss the current results of SFB 457 and of further national and international research projects in the field of network research, as well as findings and experience from practical implementation in industry. In the plenary session, Prof. Kuhn (University of Dortmund), Prof. Westkämper (University of Stuttgart), Prof. Herzog (University of Bremen), Prof. Nyhuis (University of Hannover) and Prof. Smirnov (Russian Academy of Sciences, Saint Petersburg) will address various aspects of networks in their contributions. In addition to the now traditional workshops on various topics around forming and operating networks, this year an additional workshop on "Netzwerke und Cluster in der brasilianisch-deutschen Zusammenarbeit" (networks and clusters in Brazilian-German cooperation) will be held. It is part of the visit of a delegation of government and business representatives from the Brazilian state of Bahia, who will attend the conference together with representatives of the Federal Ministry of Education and Research and the Saxon Ministry of Economic Affairs and Labour in order to establish new contacts and initiate further cooperation projects. The conference "Vernetzt planen und produzieren – VPP 2006" is at the same time the closing colloquium of SFB 457, which ends this year after seven years of intensive and successful network research. Its former spokesmen, Prof. Siegfried Wirth and Prof. Hartmut Enderlein, played a decisive part in SFB 457.
773

Feature-based configuration management of reconfigurable cloud applications

Schroeter, Julia 11 April 2014 (has links)
A recent trend in the software industry is to provide enterprise applications in the cloud that are accessible everywhere and on any device. As the market is highly competitive, customer orientation plays an important role. Companies therefore start providing applications as a service, which are directly configurable by customers in an online self-service portal. However, customer configurations are usually deployed in separate application instances. Thus, each instance is provisioned manually and must be maintained separately. Due to the induced redundancy in software and hardware components, resources are not optimally utilized. A multi-tenant aware application architecture eliminates this redundancy, as a single application instance serves the multiple customers renting the application. The combination of a configuration self-service portal with a multi-tenant aware application architecture allows customers to be served just-in-time by automating the deployment process. Furthermore, self-service portals improve application scalability in terms of functionality, as customers can adapt application configurations themselves according to their changing demands. However, the configurability of current multi-tenant aware applications is rather limited. Solutions implementing variability are mainly developed for a single business case and cannot be directly transferred to other application scenarios. The goal of this thesis is to provide a generic framework for handling application variability and automating the configuration and reconfiguration processes essential for self-service portals, while exploiting the advantages of multi-tenancy. A promising way to achieve this goal is the application of software product line methods. In software product line research, feature models are widely used to express the variability of software-intensive systems on an abstract level, as features are a common notion in software engineering and prominent in matching customer requirements against product functionality. This thesis introduces a framework for feature-based configuration management of reconfigurable cloud applications. The contribution is threefold. First, a development strategy for flexible multi-tenant aware applications is proposed, capable of integrating customer configurations at application runtime. Second, a generic method for defining concern-specific configuration perspectives is contributed. Perspectives can be tailored for certain application scopes and facilitate the handling of numerous configuration options. Third, a novel method is proposed to model and automate structured configuration processes that adapt to varying stakeholders and reduce configuration redundancies. To this end, configuration processes are modeled as workflows and adapted by applying rewrite rules triggered by stakeholder events. The applicability of the proposed concepts is evaluated in different case studies in industrial and academic contexts. In summary, the introduced framework for feature-based configuration management is a foundation for automating the configuration and reconfiguration processes of multi-tenant aware cloud applications, while enabling application scalability in terms of functionality.
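As a rough, hypothetical illustration of the feature-model idea such a framework builds on (the feature names, constraints and encoding below are invented for this sketch, not taken from the thesis), checking a customer configuration against a tiny feature model could look like this:

```python
# Minimal feature-model sketch: features, requires/excludes constraints,
# and a validity check for a customer's configuration (all names hypothetical).

FEATURES = {"Base", "Reporting", "PdfExport", "CsvExport", "MultiLanguage"}
MANDATORY = {"Base"}                                   # must be in every configuration
REQUIRES = {"PdfExport": {"Reporting"},                # a child feature implies its parent
            "CsvExport": {"Reporting"}}
EXCLUDES = {("PdfExport", "CsvExport")}                # example cross-tree constraint

def is_valid(selection: set) -> bool:
    if not selection <= FEATURES:
        return False
    if not MANDATORY <= selection:
        return False
    for feature, needed in REQUIRES.items():
        if feature in selection and not needed <= selection:
            return False
    for a, b in EXCLUDES:
        if a in selection and b in selection:
            return False
    return True

print(is_valid({"Base", "Reporting", "PdfExport"}))     # True
print(is_valid({"Base", "PdfExport", "CsvExport"}))     # False (missing parent, mutually exclusive)
```

A configuration self-service portal would run such a validity check before a tenant's selection is activated in the shared application instance.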
774

SemProj: Ein Semantic Web – basiertes System zur Unterstützung von Workflow- und Projektmanagement

Langer, André 26 March 2008 (has links)
With more than 120 million registered internet domains (as of March 2007), the World Wide Web is arguably the most comprehensive information resource of all time. The amount of information available grows by an enormous volume of data every day. This information is often embedded in documents that use the Hypertext Markup Language, which enables users to mark up layout properties of a text in an easy and efficient fashion and to publish the final document containing both layout and content information. A computer application can then extract style information from the document resource and use it to render the resulting website. Although layout information and content are both represented textually, machines have so far been able to process user content only to a very limited extent. Whereas human readers have no problem identifying and understanding the meaning of paragraphs on a website, to a machine they are essentially just a concatenation of ASCII characters. If it were possible to efficiently disclose the meaning of a word or phrase to a computer program and to process it further, astounding new applications with results of much higher quality would become possible. Users could pose queries to specialized agents that autonomously search the web for adequate matches. Moreover, data from multiple information sources could be linked and processed together on a semantic level, so that new, not explicitly stated information could be inferred. Approaches for enhancing documents with semantic metadata have existed for some time; however, many of them involve providing the information redundantly in a specialized document format. As a consequence, none of these concepts became widely used, and recent research has turned to finding ways to embed semantic annotations in ordinary HTML documents without large additional effort. The present thesis focuses on analysing these new concepts and possibilities in the area of collaborative work. The objective is to develop the prototype of a web application for managing typical project and workflow management tasks, in which any available information can be analysed, conditioned, and reused from a semantic viewpoint, independently of a specific application domain and system platform. Microformats and RDFa are two such relatively new concepts that allow an application to extract semantic information from a document resource; they are therefore examined in particular and compared with respect to their weaknesses and future potential in the context of the Semantic Web.
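As a rough sketch of the kind of embedded annotation examined here (the HTML snippet and property names below are illustrative, not taken from SemProj), RDFa-style attributes can be pulled out of ordinary HTML with the Python standard library alone:

```python
from html.parser import HTMLParser

# Illustrative HTML with RDFa-style property attributes (invented example data).
DOC = """
<div about="#task-42">
  <span property="dc:title">Prepare project report</span>
  <span property="dc:creator">Alice</span>
  <span property="ical:dtstart">2008-04-01</span>
</div>
"""

class RDFaExtractor(HTMLParser):
    """Collect (property, text) pairs from elements carrying a 'property' attribute."""
    def __init__(self):
        super().__init__()
        self._current = None
        self.pairs = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if "property" in attrs:
            self._current = attrs["property"]

    def handle_data(self, data):
        if self._current and data.strip():
            self.pairs.append((self._current, data.strip()))
            self._current = None

parser = RDFaExtractor()
parser.feed(DOC)
print(parser.pairs)
# [('dc:title', 'Prepare project report'), ('dc:creator', 'Alice'), ('ical:dtstart', '2008-04-01')]
```

The same principle applies to microformats, where class attributes such as `vcard` carry the meaning; dedicated parsers would normally do this extraction, and the snippet only illustrates the idea.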
775

Lean production in the aggregate stone industry: The road to becoming more sustainable and productive : A case study for a Swedish construction company

Abrahamsson, Ludwig, Ramsten, Oscar January 2022 (has links)
The aggregate stone industry produces stone, a critical material for the construction industry's value chain and thus a valuable building-material resource that society depends on. The stone is mined and refined in so-called quarries. Yet the industry is classified as an environmentally hazardous activity because of its traditional reliance on fossil fuels, so the aggregate stone business needs to become more sustainable. This master's thesis was carried out at a large Swedish construction company within the aggregate stone industry, where perceptions of challenges around sustainability and productivity were studied; these themes permeate the work. Because of the industry's environmental impact, there is a need to reconcile and implement Lean principles and the circular economy in traditional industries such as construction, where sustainability has become standard practice. Process changes play a significant role in Lean, which strives to create flow and minimise waste of resources through various efficiency measures. To make the industry more sustainable, the circular economy offers an approach for maintaining a favourable environment and reducing waste, as well as adopting useful mechanisms and processes such as recycling and reuse. This study focuses on the economic and environmental aspects of sustainability. Based on the results, the aggregate stone industry should adapt its organisation to work towards Lean principles and the circular economy in order to perform better in these two respects. Data were collected through interviews with experts in the aggregate stone industry and through observations at selected facilities. Production and workflow in the industry should be prioritised, focusing on developing the six themes examined across the value chain: location, machinery, resources, communication and information, the employees and, finally, the end-user. Increased awareness and knowledge in these areas and in sustainability would give companies better conditions to invest financial and environmental resources to achieve a competitive advantage and a leading role in the industry.
776

CyberWater: An open framework for data and model integration

Ranran Chen (18423792) 03 June 2024 (has links)
Workflow management systems (WMSs) are commonly used to organize and automate sequences of tasks as workflows to accelerate scientific discoveries. During complex workflow modeling, a local interactive workflow environment is desirable, as users usually rely on their rich local environments for fast prototyping and refinement before they consider using more powerful computing resources.

This dissertation describes the development of the CyberWater framework on top of workflow management systems. Against the backdrop of data-intensive and complex models, CyberWater exemplifies the transformation of intricate data into insightful, actionable knowledge. The dissertation introduces the architecture of CyberWater, focusing on its adaptation and enhancement of the VisTrails system, and highlights the significance of control- and data-flow mechanisms and the introduction of new data formats for effective data processing within the framework.

The study presents an in-depth analysis of the design and implementation of generic model agent toolkits. The discussion centers on template-based component mechanisms and integration with popular platforms, while emphasizing the toolkit's ability to provide on-demand access to high-performance computing (HPC) resources for large-scale data handling. The development of an asynchronously controlled workflow within CyberWater is also explored. This approach improves computational performance by exploiting pipeline-level parallelism and allows on-demand submission of HPC jobs, significantly improving the efficiency of data processing.

A comprehensive methodology for model-driven development and Python code integration within the CyberWater framework, as well as applications of GPT models for automated data retrieval, are also introduced. The dissertation examines the use of Git Actions for automating data retrieval processes and discusses the transformation of raw data into a compatible format, enhancing the adaptability and reliability of the data retrieval component of the adaptive generic model agent toolkit.

For the development and maintenance of software within the CyberWater framework, tools such as GitHub are used for version control, and automated processes are outlined for software updates and error reporting. The collection of user data also highlights the role of the CyberWater server in these processes.

In conclusion, this dissertation presents comprehensive work on the advancement of the CyberWater framework, setting new standards in scientific workflow management and demonstrating how technological innovation can significantly elevate the process of scientific discovery.
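As a hedged sketch of the pipeline-level parallelism idea described above (the stage functions and data are invented placeholders; this is not CyberWater code), overlapping workflow stages can be arranged in Python with one worker per stage connected by queues:

```python
# Sketch of pipeline-level parallelism: each stage runs in its own thread and
# stages are connected by queues, so the fetch stage can work on chunk k while
# the analyze stage works on chunk k-1. Stage functions and data are placeholders.
import queue
import threading

STOP = object()  # sentinel marking the end of the stream

def stage(func, inbox, outbox):
    while True:
        item = inbox.get()
        if item is STOP:
            outbox.put(STOP)
            return
        outbox.put(func(item))

def fetch(chunk_id):      # placeholder: acquire raw data
    return f"raw-{chunk_id}"

def analyze(raw):         # placeholder: heavy computation / model run
    return f"result({raw})"

def run_pipeline(chunk_ids):
    q_in, q_mid, q_out = queue.Queue(), queue.Queue(), queue.Queue()
    threads = [threading.Thread(target=stage, args=(fetch, q_in, q_mid)),
               threading.Thread(target=stage, args=(analyze, q_mid, q_out))]
    for t in threads:
        t.start()
    for cid in chunk_ids:
        q_in.put(cid)
    q_in.put(STOP)
    results = []
    while (item := q_out.get()) is not STOP:
        results.append(item)
    for t in threads:
        t.join()
    return results

print(run_pipeline(range(4)))
```

In an HPC setting, the analyze stage would typically submit a job to the cluster instead of computing locally, which is what makes asynchronous, on-demand job submission attractive.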
777

Migrating from integrated library systems to library services platforms : An exploratory qualitative study for the implications on academic libraries’ workflows

Grammenis, Efstratios, Mourikis, Antonios January 2018 (has links)
This master's thesis is an exploratory qualitative study of academic libraries' transition from integrated library systems to next-generation integrated library systems, or library services platforms, and of the potential implications for their internal workflows. Libraries all over the world are currently facing a number of challenges in acquiring, describing, and making available to the public all the resources, both printed and electronic, that they manage. Academic libraries in particular have every reason to fulfil their users' needs, since the majority of their users increasingly rely on library resources for scientific research and educational purposes. In this study we explore the phenomenon globally using the available literature and identify the implications for libraries' workflows and possible future developments. Moreover, through observation and semi-structured interviews we examine current developments in the Greek context regarding the adoption of next-generation ILS and the possible implications for workflows. Finally, we compare the Greek situation with the international one.
778

Energy-aware scheduling : complexity and algorithms / Ordonnancement sous contrainte d'énergie : complexité et algorithmes

Renaud-Goud, Paul 05 July 2012 (has links)
In this thesis we have tackled several scheduling problems under energy constraints, since the energy issue has become crucial for both economic and environmental reasons. In the first chapter, we exhibit tight bounds on the energy metric of a classical algorithm that minimizes the makespan of independent tasks. In the second chapter, we schedule several independent but concurrent pipelined applications and address problems combining multiple criteria: period, latency and energy. We perform an exhaustive complexity study and describe the performance of new heuristics. In the third chapter, we study the replica placement problem in a tree network. We aim to minimize the energy consumption in a dynamic setting. After a complexity study, we confirm the quality of our heuristics through a complete set of simulations. In the fourth chapter, we come back to streaming applications, this time in the form of series-parallel graphs, and map them onto a chip multiprocessor. The design of a polynomial algorithm for a simple variant allows us to derive heuristics for the most general problem, whose NP-completeness we establish. In the fifth chapter, we study energy bounds of different routing policies in chip multiprocessors, compared with the classical XY routing, and develop new routing heuristics. In the last chapter, we experimentally compare the performance of different algorithms from the literature that tackle the problem of mapping DAG applications onto real machines so as to minimize energy consumption.
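As a generic illustration of the trade-off such scheduling problems involve (this is a textbook-style sketch, not an algorithm from the thesis; the speed levels and power model are invented assumptions), a greedy makespan scheduler can be paired with a simple energy estimate:

```python
# Longest-processing-time (LPT) list scheduling of independent tasks on p processors,
# plus a toy energy estimate under a cubic dynamic-power model (power ~ speed**3).
# Speeds, task times, and the power model are illustrative assumptions only.
import heapq

def lpt_schedule(task_times, num_procs):
    """Assign tasks to processors greedily; return per-processor loads and the assignment."""
    loads = [0.0] * num_procs
    heap = [(0.0, p) for p in range(num_procs)]
    heapq.heapify(heap)
    assignment = {}
    for tid, t in sorted(enumerate(task_times), key=lambda x: -x[1]):
        load, p = heapq.heappop(heap)          # least-loaded processor so far
        assignment[tid] = p
        heapq.heappush(heap, (load + t, p))
        loads[p] = load + t
    return loads, assignment

def energy(loads, speed):
    """Toy model: running a load L at speed s takes L/s time and consumes (L/s) * s**3."""
    return sum((load / speed) * speed**3 for load in loads)

tasks = [4.0, 3.0, 3.0, 2.0, 2.0, 1.0]
loads, _ = lpt_schedule(tasks, num_procs=2)
for s in (1.0, 0.8):                           # lowering the speed trades makespan for energy
    print(f"speed {s}: makespan {max(loads)/s:.2f}, energy {energy(loads, s):.2f}")
```

The two printed lines show the core tension studied in energy-aware scheduling: running slower lengthens the makespan but reduces the energy consumed.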
779

Demography of Birch Populations across Scandinavia

Sendrowski, Janek January 2022 (has links)
Boreal forests are particularly vulnerable to climate change, experiencing a much more drastic increase in temperatures and having a limited number of more northern refugia. The trees making up these vast and important ecosystems already had to adapt to environmental pressures brought about by the repeated glaciations of past ice ages. Studying the patterns of adaptation of these trees can thus provide valuable insights into how to mitigate future damage. This thesis presents and analyses the population structure, demographic history and distribution of fitness effects (DFE) of the diploid Betula pendula and the tetraploid B. pubescens across Scandinavia. Birches, which are widespread in boreal forests and of great economic importance, are superb model species. The analyses in this work confirm the expected postglacial population expansion and diploid-tetraploid introgression. They furthermore establish the presence of two genetic clusters and a remarkably similar DFE for the two species. This work also contributes a transparent, reproducible and reusable pipeline that facilitates running similar analyses for related species.
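As a hedged, generic illustration of one building block of such demographic analyses (the genotype data below are invented; this is not the thesis pipeline), a folded site frequency spectrum can be tallied from diploid genotype calls like this:

```python
# Tally a folded site frequency spectrum (SFS) from diploid genotype calls
# (0, 1, 2 = count of alternate alleles per individual). Toy data only.
from collections import Counter

def folded_sfs(genotype_matrix):
    """genotype_matrix: one row per variant site, one column per individual."""
    n_chrom = 2 * len(genotype_matrix[0])          # diploid: two chromosomes each
    sfs = Counter()
    for site in genotype_matrix:
        count = sum(site)                          # alternate-allele count at this site
        sfs[min(count, n_chrom - count)] += 1      # fold: minor-allele frequency class
    return dict(sorted(sfs.items()))

sites = [
    [0, 1, 0, 0],   # singleton
    [1, 1, 2, 0],   # minor-allele count 4
    [2, 2, 2, 1],   # count 7 -> folded class 1
]
print(folded_sfs(sites))   # {1: 2, 4: 1}
```

Spectra of this kind feed demographic inference and DFE estimation, which is why a reproducible pipeline around them transfers readily to related species.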
