101

Utvärdering av ny teknologi för utveckling av gränssnitt på toppen av SAP / Evaluation of New Technologies for the Development of Interfaces on top of SAP

Apelqvist, Morgan January 2013 (has links)
This report is aimed at those who have a business and/or technical interest in the development of interfaces to Enterprise Resource Planning (ERP) systems from SAP. There has been a substantial change in recent years in how we use the internet and mobile devices every day. We have all become accustomed to sleek, user-friendly and flexible interfaces. We do our banking and use social media on our smartphones, and we take for granted that we can use the same interfaces just as easily on our tablets. This behavioral change has also reached the users of interfaces to SAP ERP systems: they expect the interfaces to be just as intuitive and user-friendly, and to work just as well on their smartphones and tablets as on their laptops. However, the current interfaces to SAP ERP systems have limited support for mobile devices, and their appearance and ease of use have also had limitations. SAP has recognized that users will not accept these limitations much longer and has therefore developed a new technology called SAPUI5. SAP claims that this technology meets the users' new needs. On behalf of Claremont AB, I was asked to investigate whether SAPUI5 really does so and to put the new technology to the test. In the study I built three different interfaces that correspond to the most important new requirements of the users. With the interfaces I developed, I have shown that SAPUI5 is a highly capable technology that can well be used to develop future-proof, user-friendly interfaces for both desktop and mobile devices. To understand how SAPUI5 differs from the existing technologies, I conducted a qualitative interview with two senior SAP developers who use the older technologies and used the results to compare the older technologies with the new one. The comparison confirmed that SAPUI5 addresses many of the needs that the older technologies struggle with. Moreover, I concluded that if you have to choose one of these technologies, you do well to thoroughly investigate the requirements placed on the new interface: the different technologies meet different types of user needs, so it is essential to understand what the users need to be able to accomplish with the new interface.
102

Efficient Transaction Processing in SAP HANA Database: The End of a Column Store Myth

Sikka, Vishal, Färber, Franz, Lehner, Wolfgang, Cha, Sang Kyun, Peh, Thomas, Bornhövd, Christof 11 August 2022 (has links)
The SAP HANA database is the core of SAP's new data management platform. The overall goal of the SAP HANA database is to provide a generic but powerful system for different query scenarios, both transactional and analytical, on the same data representation within a highly scalable execution environment. Within this paper, we highlight the main features that differentiate the SAP HANA database from classical relational database engines. To that end, we first outline the general architecture and design criteria of the SAP HANA database. In a second step, we challenge the common belief that column store data structures are only superior in analytical workloads and not well suited for transactional workloads. We outline the concept of record life cycle management to use different storage formats for the different stages of a record. We not only discuss the general concept but also dive into some of the details of how to efficiently propagate records through their life cycle and move database entries from write-optimized to read-optimized storage formats. In summary, the paper aims at illustrating how the SAP HANA database is able to efficiently work in analytical as well as transactional workload environments.
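To make the record life-cycle concept above concrete, here is a minimal Python sketch of the general idea: inserts land in a write-optimized delta structure and are periodically merged into a read-optimized, dictionary-encoded column. This is an illustration of the concept only, not SAP HANA's actual implementation; the names and the single-column scope are assumptions.

```python
from bisect import bisect_left

class ColumnLifecycle:
    """Toy sketch: write-optimized delta plus read-optimized main store."""

    def __init__(self):
        self.delta = []       # append-only delta: fast inserts, unsorted
        self.main_dict = []   # sorted dictionary of distinct values
        self.main_codes = []  # dictionary-encoded value IDs (read-optimized)

    def insert(self, value):
        # Transactional path: appends go to the delta, no re-encoding cost.
        self.delta.append(value)

    def merge(self):
        # Move entries from the write-optimized to the read-optimized format.
        values = [self.main_dict[c] for c in self.main_codes] + self.delta
        self.main_dict = sorted(set(values))
        self.main_codes = [bisect_left(self.main_dict, v) for v in values]
        self.delta = []

    def scan_count(self, value):
        # Analytical path: scan the encoded main store, then the small delta.
        hits = 0
        if value in self.main_dict:
            code = bisect_left(self.main_dict, value)
            hits = sum(1 for c in self.main_codes if c == code)
        return hits + self.delta.count(value)

col = ColumnLifecycle()
for v in ["DE", "US", "DE", "FR"]:
    col.insert(v)
col.merge()
col.insert("DE")
print(col.scan_count("DE"))   # 3
```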
103

Data mining with the SAP NetWeaver BI accelerator

Legler, Thomas, Lehner, Wolfgang, Ross, Andrew 03 July 2023 (has links)
The new SAP NetWeaver Business Intelligence accelerator is an engine that supports online analytical processing. It performs aggregation in memory and in query runtime over large volumes of structured data. This paper first briefly describes the accelerator and its main architectural features, and cites test results that indicate its power. Then it describes in detail how the accelerator may be used for data mining. The accelerator can perform data mining in the same large repositories of data, using the same compact index structures that it uses for analytical processing. A first such implementation of data mining is described, and the results of a performance evaluation are presented. Association rule mining in a distributed architecture was implemented with a variant of the BUC iceberg cubing algorithm. Test results suggest that useful online mining should be possible with wait times of less than 60 seconds on business data that has not been preprocessed.
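The distributed association-rule mining mentioned above builds on a variant of the BUC iceberg cubing algorithm. As a rough, single-node illustration of the iceberg idea (compute only those group-bys whose count reaches a minimum support, pruning infrequent partitions early), a minimal Python sketch might look as follows; it is not the accelerator's implementation.

```python
def buc(rows, dims, min_sup, prefix=(), out=None):
    """Minimal Bottom-Up Computation (BUC) of an iceberg cube.

    rows    -- list of tuples, one value per dimension
    dims    -- indices of dimensions still available for grouping
    min_sup -- minimum group size (iceberg condition)
    Returns a dict mapping group-by prefixes to their counts.
    """
    if out is None:
        out = {}
    out[prefix] = len(rows)                  # aggregate for the current group-by
    for i, d in enumerate(dims):
        parts = {}
        for r in rows:                       # partition on dimension d
            parts.setdefault(r[d], []).append(r)
        for val, part in parts.items():
            if len(part) >= min_sup:         # iceberg pruning
                buc(part, dims[i + 1:], min_sup, prefix + ((d, val),), out)
    return out

data = [("A", "x"), ("A", "x"), ("A", "y"), ("B", "x")]
cube = buc(data, dims=[0, 1], min_sup=2)
# {(): 4, ((0, 'A'),): 3, ((0, 'A'), (1, 'x')): 2, ((1, 'x'),): 3}
print(cube)
```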
104

Повышение эффективности бизнес-процессов предприятия на основе SAP-аналитики : магистерская диссертация / Improving the efficiency of enterprise business processes based on SAP analytics

Мустафаева, А. Э., Mustafaeva, A. E. January 2020 (has links)
The relevance of the topic stems from the need for companies in a competitive environment to quickly recognize changes in market dynamics and respond to them in a timely manner in order to increase profits. The scientific novelty of the research lies in finding ways to improve the efficiency of an enterprise's business processes based on SAP analytics for the tourism business. The practical significance of the study lies in the application of the proposed method for increasing the efficiency of business processes in a travel agency and in the economic benefit obtained from implementing the information system.
105

Datenzentrierte Bestimmung von Assoziationsregeln in parallelen Datenbankarchitekturen

Legler, Thomas 15 August 2009 (has links) (PDF)
The importance of data mining is widely acknowledged today. Mining for association rules and frequent patterns is a central activity in data mining. Three main strategies are available for such mining: APRIORI, FP-tree-based approaches like FP-GROWTH, and algorithms based on vertical data structures and depth-first mining strategies like ECLAT and CHARM. Unfortunately, most of these algorithms are only moderately suitable for many “real-world” scenarios because their usability and the special characteristics of the data are two aspects of practical association rule mining that require further work. All mining strategies for frequent patterns use a parameter called minimum support to define a minimum occurrence frequency for searched patterns. This parameter cuts down the number of patterns searched to improve the relevance of the results.
In complex business scenarios, it can be difficult and expensive to define a suitable value for the minimum support because it depends strongly on the particular datasets. Users are often unable to set this parameter for unknown datasets, and unsuitable minimum-support values can extract millions of frequent patterns and generate enormous runtimes. For this reason, it is not feasible to permit ad-hoc data mining by unskilled users. Such users do not have the knowledge and time to define suitable parameters by trial-and-error procedures. Discussions with users of SAP software have revealed great interest in the results of association-rule mining techniques, but most of these users are unable or unwilling to set very technical parameters. Given such user constraints, several studies have addressed the problem of replacing the minimum-support parameter with more intuitive top-n strategies. We have developed an adaptive mining algorithm to give untrained SAP users a tool to analyze their data easily without the need for elaborate data preparation and parameter determination. Previously implemented approaches to distributed frequent-pattern mining were expensive and time-consuming tasks for specialists. In contrast, we propose a method to accelerate and simplify the mining process by using top-n strategies and relaxing some requirements on the results, such as completeness. Unlike data approximation techniques such as sampling, our algorithm always returns exact frequency counts. The only drawback is that the result set may fail to include some of the patterns up to a specific frequency threshold. Another aspect of real-world datasets is the fact that they are often partitioned for shared-nothing architectures, following business-specific parameters like location, fiscal year, or branch office. Users may also want to conduct mining operations spanning data from different partners, even if the local data from the respective partners cannot be integrated at a single location for data security reasons or due to their large volume. Almost every data mining solution is constrained by the need to hide complexity. As far as possible, the solution should offer a simple user interface that hides technical aspects like data distribution and data preparation. Given that BW Accelerator users have such simplicity and distribution requirements, we have developed an adaptive mining algorithm to give unskilled users a tool to analyze their data easily, without the need for complex data preparation or consolidation. For example, Business Intelligence scenarios often partition large data volumes by fiscal year to enable efficient optimizations for the data used in actual workloads. For most mining queries, more than one data partition is of interest, and therefore, distribution handling that leaves the data unaffected is necessary. The algorithms presented in this work have been developed to work with data stored in SAP BW. A salient feature of SAP BW Accelerator is that it is implemented as a distributed landscape that sits on top of a large number of shared-nothing blade servers. Its main task is to execute OLAP queries that require fast aggregation of many millions of rows of data. Therefore, the distribution of data over the dedicated storage is optimized for such workloads. Data mining scenarios use the same data from storage, but reporting takes precedence over data mining, and hence, the data cannot be redistributed without massive costs.
Distribution by special data semantics or user-defined selections can produce many partitions and very different partition sizes. The handling of such real-world distributions for frequent-pattern mining is an important task, but it conflicts with the requirement of balanced partitions.
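The top-n strategy and the merging of local results from shared-nothing partitions described above can be sketched briefly. The fragment below is a deliberately simplified Python illustration (single items instead of general patterns; exact counts summed across partitions, then the n most frequent kept), not the thesis' actual algorithm.

```python
from collections import Counter
from heapq import nlargest

def local_counts(partition):
    """Count item occurrences in one shared-nothing partition."""
    return Counter(item for transaction in partition for item in transaction)

def top_n_items(partitions, n):
    """Merge exact local counts and keep the n most frequent items."""
    total = Counter()
    for part in partitions:
        total.update(local_counts(part))     # union of local results
    return nlargest(n, total.items(), key=lambda kv: kv[1])

# Example: two partitions (e.g. two fiscal years) of market-basket data.
p2008 = [["milk", "bread"], ["milk", "beer"], ["bread"]]
p2009 = [["milk", "bread", "beer"], ["beer"]]
print(top_n_items([p2008, p2009], n=2))      # e.g. [('milk', 3), ('bread', 3)]
```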
106

Bezpečná implementace technologie blockchain / Secure Implementation of Blockchain Technology

Kovář, Adam January 2020 (has links)
This thesis describes the basics of a blockchain technology implementation for the SAP Cloud Platform, with emphasis on the security and safety of the critical data stored in the blockchain. The thesis implements a letter of credit to demonstrate and control business process administration, and it compares the possible technology modifications. It describes all elementary parts of the software that must be implemented to store data and secure its integrity, and derives an ideal configuration for each programmable block in the implementation; alternative configurations of possible solutions are described with their pros and cons as well. Another part of the thesis is an actual working implementation, a proof of concept covering the letter of credit. All parts of the code are designed to be stand-alone, to provide a working concept for a possible implementation, and can serve as a basis for writing production code. A user of this concept can follow the whole process and create new statuses for the letter-of-credit business process.
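As a rough illustration of the integrity property such a blockchain-based letter-of-credit process relies on, the sketch below hash-chains status updates so that later tampering is detectable. It is a generic Python illustration, not the thesis' SAP Cloud Platform implementation, and all field names and statuses are assumptions.

```python
import hashlib
import json
import time

def make_block(status, previous_hash):
    """Create one block recording a letter-of-credit status change."""
    block = {
        "timestamp": time.time(),
        "status": status,                 # e.g. "issued", "documents_presented"
        "previous_hash": previous_hash,
    }
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

def verify(chain):
    """Check linkage and that no recorded status was altered after the fact."""
    for i, block in enumerate(chain):
        payload = {k: v for k, v in block.items() if k != "hash"}
        digest = hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()
        if digest != block["hash"]:
            return False
        if i > 0 and block["previous_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = [make_block("issued", previous_hash="0" * 64)]
chain.append(make_block("documents_presented", previous_hash=chain[-1]["hash"]))
chain.append(make_block("paid", previous_hash=chain[-1]["hash"]))
print(verify(chain))              # True
chain[1]["status"] = "rejected"   # tampering
print(verify(chain))              # False: the stored hash no longer matches
```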
107

Datenzentrierte Bestimmung von Assoziationsregeln in parallelen Datenbankarchitekturen

Legler, Thomas 22 June 2009 (has links)
The importance of data mining is widely acknowledged today. Mining for association rules and frequent patterns is a central activity in data mining. Three main strategies are available for such mining: APRIORI, FP-tree-based approaches like FP-GROWTH, and algorithms based on vertical data structures and depth-first mining strategies like ECLAT and CHARM. Unfortunately, most of these algorithms are only moderately suitable for many “real-world” scenarios because their usability and the special characteristics of the data are two aspects of practical association rule mining that require further work. All mining strategies for frequent patterns use a parameter called minimum support to define a minimum occurrence frequency for searched patterns. This parameter cuts down the number of patterns searched to improve the relevance of the results.
In complex business scenarios, it can be difficult and expensive to define a suitable value for the minimum support because it depends strongly on the particular datasets. Users are often unable to set this parameter for unknown datasets, and unsuitable minimum-support values can extract millions of frequent patterns and generate enormous runtimes. For this reason, it is not feasible to permit ad-hoc data mining by unskilled users. Such users do not have the knowledge and time to define suitable parameters by trial-and-error procedures. Discussions with users of SAP software have revealed great interest in the results of association-rule mining techniques, but most of these users are unable or unwilling to set very technical parameters. Given such user constraints, several studies have addressed the problem of replacing the minimum-support parameter with more intuitive top-n strategies. We have developed an adaptive mining algorithm to give untrained SAP users a tool to analyze their data easily without the need for elaborate data preparation and parameter determination. Previously implemented approaches to distributed frequent-pattern mining were expensive and time-consuming tasks for specialists. In contrast, we propose a method to accelerate and simplify the mining process by using top-n strategies and relaxing some requirements on the results, such as completeness. Unlike data approximation techniques such as sampling, our algorithm always returns exact frequency counts. The only drawback is that the result set may fail to include some of the patterns up to a specific frequency threshold. Another aspect of real-world datasets is the fact that they are often partitioned for shared-nothing architectures, following business-specific parameters like location, fiscal year, or branch office. Users may also want to conduct mining operations spanning data from different partners, even if the local data from the respective partners cannot be integrated at a single location for data security reasons or due to their large volume. Almost every data mining solution is constrained by the need to hide complexity. As far as possible, the solution should offer a simple user interface that hides technical aspects like data distribution and data preparation. Given that BW Accelerator users have such simplicity and distribution requirements, we have developed an adaptive mining algorithm to give unskilled users a tool to analyze their data easily, without the need for complex data preparation or consolidation. For example, Business Intelligence scenarios often partition large data volumes by fiscal year to enable efficient optimizations for the data used in actual workloads. For most mining queries, more than one data partition is of interest, and therefore, distribution handling that leaves the data unaffected is necessary. The algorithms presented in this work have been developed to work with data stored in SAP BW. A salient feature of SAP BW Accelerator is that it is implemented as a distributed landscape that sits on top of a large number of shared-nothing blade servers. Its main task is to execute OLAP queries that require fast aggregation of many millions of rows of data. Therefore, the distribution of data over the dedicated storage is optimized for such workloads. Data mining scenarios use the same data from storage, but reporting takes precedence over data mining, and hence, the data cannot be redistributed without massive costs.
Distribution by special data semantics or user-defined selections can produce many partitions and very different partition sizes. The handling of such real-world distributions for frequent-pattern mining is an important task, but it conflicts with the requirement of balanced partitions.
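One detail worth illustrating from the abstract above is why relaxing completeness matters: if each partition ships only its local top-n items, a pattern that is frequent overall but never a local winner can be lost. A contrived Python sketch of that pitfall follows (single items, hypothetical data); the thesis' algorithm is designed around exactly this trade-off.

```python
from collections import Counter
from heapq import nlargest

def local_top_n(partition, n):
    """Naive merging: each partition ships only its n locally most frequent items."""
    counts = Counter(item for t in partition for item in t)
    return dict(nlargest(n, counts.items(), key=lambda kv: kv[1]))

# "beer" ranks only third in each partition, yet is globally among the most frequent.
part_a = [["milk", "bread", "beer"], ["milk", "bread", "beer"], ["milk", "bread"], ["milk"]]
part_b = [["wine", "cheese", "beer"], ["wine", "cheese", "beer"], ["wine", "cheese"], ["wine"]]

merged = Counter()
for part in (part_a, part_b):
    merged.update(local_top_n(part, n=2))   # only local winners are shipped

exact = Counter(item for part in (part_a, part_b) for t in part for item in t)
print(merged.most_common(3))   # "beer" is missing from the naive merge
print(exact.most_common(3))    # globally, "beer" has count 4, tied for the top
```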
108

Identificação de requisitos básicos de sistemas de medição de desempenho e avaliações de casos de um sistema computacional de suporte / Performance measurement systems basic requirements identification and cases assessment of a computer-based support system

Esposto, Kleber Francisco 30 October 2003 (has links)
It presents a broad literature survey of emerging considerations about Performance Measurement Systems (PMS) and the modern environment that surrounds companies and affects how they evaluate performance. From this survey it compiles the main PMS requirements into a table and proposes a conceptual model for a performance measurement system. It also identifies a commercial computer-based system intended to support strategic performance management in companies and analyzes how well this system satisfies the compiled requirements. The analysis is based on the perception of the author, who was trained in the tool, and on the perception of professionals at companies that use the evaluated system; the latter was obtained through questionnaire-based interviews conducted in a field study.
109

Water Use of Four Commonly Planted Landscape Tree Species in a Semi-Arid Suburban Environment

Bunnell, Michael Cameron 01 December 2015 (has links)
Native plant communities and agricultural land are commonly converted to urban areas as cities across the Western United States continue to grow and expand. This expansion is typically accompanied by afforestation where a common goal among communities is to maximize shade tree composition. Planted forests in these regions are commonly composed of introduced tree species native to mesic environments and their ability to persist is dependent on consistent irrigation inputs. Many potential ecosystem services may be derived from planting trees in urban and suburban areas; however, there are also costs associated with extensive afforestation, and shade tree cover may have significant implications for municipal water budgets. In this study I evaluate variation in daily and seasonal water use of regionally common suburban landscape tree species in the Heber Valley (Wasatch County, Utah). I had two primary objectives: (1) to identify and understand the differences in transpiration between landscape tree species in a suburban setting and (2) to assess the sensitivity of sap flux and transpiration to variation in vapor pressure deficit, wind speed, and incoming shortwave radiation. I used Granier's thermal dissipation method to measure the temperature difference (ΔT) between two sap flux probes. The empirical equation developed by Granier was used to convert ΔT into sap flux density (Jo) measurements, which were then scaled to whole-tree transpiration. There were consistent and substantial differences in sap flux between tree species. I found that Picea pungens under irrigated growing conditions, on average, had Jo rates that were 32% greater and whole tree water use (ET) rates that were 550% greater than those of all other species studied. The observed differences in Jo may be partially explained by xylem architecture and physiological control over stomatal aperture. However, the rate of water flux in the outermost portion of sapwood does not necessarily determine the magnitude of whole tree transpiration. Rather, ET in this study was largely explained by the combined effects of irrigation, tree size, and sapwood to heartwood ratio.
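For readers unfamiliar with the method, the conversion from the probe temperature difference ΔT to sap flux density Jo is commonly done with Granier's empirical calibration. The sketch below uses the widely cited 1987 coefficients and hypothetical readings; it is not necessarily the exact form or scaling used in this thesis.

```python
def sap_flux_density(dT, dT_max):
    """Granier's empirical calibration, in its commonly cited form.

    dT     -- measured temperature difference between heated and reference probe (°C)
    dT_max -- ΔT under zero-flow conditions, typically taken pre-dawn (°C)
    Returns Jo in cm^3 cm^-2 s^-1 (flux per unit sapwood area).
    """
    k = (dT_max - dT) / dT            # dimensionless flow index
    return 0.0119 * k ** 1.231

def whole_tree_transpiration(jo, sapwood_area_cm2):
    """Scale sap flux density to whole-tree water use (cm^3 s^-1)."""
    return jo * sapwood_area_cm2

jo = sap_flux_density(dT=8.0, dT_max=10.0)          # hypothetical readings
print(whole_tree_transpiration(jo, sapwood_area_cm2=150.0))
```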
110

Identification of genes induced in the vascular pathogen Verticillium longisporum by xylem sap metabolites of Brassica napus using an improved genome-wide quantitative cDNA-AFLP / Identifizierung von Xylemsaft-induzierten Genen im vaskulären Pathogen Verticillium longisporum mittels einer verbesserten cDNA-AFLP-Methode für transkriptomweite Expressionsstudien

Weiberg, Arne 06 November 2008 (has links)
No description available.
