661

Gridens svar på överlevnad : en studie om revisorers beaktning av fortsatt drift / The grid's answer to survival: a study of auditors' consideration of going concern

Wahlström, Jim, Akl, Charlene January 2011 (has links)
An auditor's job is to review a company's figures and, as an independent party, give an accurate picture of its financial situation. In doing so, auditors must follow the ISA standards, among them ISA 570, which deals with going concern. The standard lists a number of factors that may indicate that a company could have problems continuing its operations. The difficulty is that ISA does not weigh which events are more significant than others in the assessment; that judgement is left to the auditor. The purpose of this paper is to describe which factors auditors consider more important than others when assessing going concern, and to explain why this is so. To address this purpose we used both a quantitative and a qualitative method: the quantitative method was based on a grid model called the Repertory Grid, and the qualitative method consisted of interview questions. To obtain data we interviewed three certified public accountants. In our results and conclusion, we found that it is difficult to determine whether one event is more important than another. The explanation is that the factors the auditor takes into account are most often situation-specific, so the auditor needs to draw on prior knowledge of the company and also build an overall picture of the specific situation with the help of various dimensions.
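For readers unfamiliar with the repertory grid technique named above, it is essentially a matrix in which elements (here, going-concern indicators) are rated against bipolar constructs elicited in interviews. A minimal Python sketch, with indicator names, construct poles and ratings invented purely for illustration:

```python
# Minimal repertory-grid sketch: elements (going-concern indicators) rated
# against bipolar constructs on a 1-5 scale. All names and ratings below are
# invented for illustration, not data from the study.

elements = ["recurring losses", "negative equity", "loss of key customer"]
constructs = [("company-specific", "situation-independent"),
              ("quantitative signal", "qualitative signal")]

# ratings[c][e]: how element e is rated on construct c (1 = left pole, 5 = right pole)
ratings = [
    [2, 1, 4],
    [1, 1, 5],
]

def construct_means(grid):
    """Average rating per construct, a crude summary of how the
    interviewee positions the indicators along each construct."""
    return [sum(row) / len(row) for row in grid]

for (left, right), mean in zip(constructs, construct_means(ratings)):
    print(f"{left} <-> {right}: mean rating {mean:.1f}")
```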
662

Experimental and Theoretical Assessment of PBGA Reliability in Conjunction with Field-Use Conditions

Tunga, Krishna Rajaram 09 April 2004 (has links)
With the dramatic advances that have taken place in microelectronics over the past three decades, ball-grid array (BGA) packages are increasingly being used in microsystems applications. BGA packages with an area-array configuration have several advantages: smaller footprint, faster signal transmission, testability, reworkability, ease of handling, and so on. Although ceramic ball grid array (CBGA) packages have been used extensively in the microsystems industry, the use of plastic ball grid array (PBGA) packages is relatively new, especially for automotive and aerospace applications where harsh thermal conditions prevail. This thesis work has developed an experimental and a theoretical modeling program to study the reliability of two PBGA packages. The physics-based theoretical models take into consideration time-dependent creep behavior through power-law creep and time-independent plastic behavior through multi-linear kinematic hardening. In addition, unified viscoplastic constitutive models are also taken into consideration. The models employ two damage metrics, namely inelastic strain and inelastic strain energy density, to predict the solder joint fatigue life. The theoretical predictions have been validated through air-to-air in-house thermal cycling tests carried out between 55°C and 125°C. In addition, laser-moiré interferometry has been used to determine the displacement contours in a cross-section of the package at various temperatures. The contours measured through moiré interferometry have also been used to validate the thermally induced displacement contours predicted by the models. Excellent agreement is seen between the experimental data and the theoretical predictions. In addition to life prediction, the models have been extended to map field-use conditions onto accelerated thermal cycling conditions. Both linear and non-linear mapping techniques have been developed, employing inelastic strain and strain energy density as the damage metric. It is shown through this research that the symmetric MIL-STD accelerated thermal cycles currently in practice in industry have to be modified to account for the higher percentage of creep deformation experienced by the solder joints under field-use conditions. Design guidelines have been developed for such modifications to the accelerated thermal cycles.
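The inelastic strain energy density metric mentioned above is commonly paired with a power-law fatigue correlation of the form N_f = C(ΔW)^(-m); the sketch below illustrates that relationship and the resulting acceleration factor with placeholder constants, since the thesis's fitted coefficients are not reproduced here.

```python
# Illustrative energy-density fatigue correlation N_f = C * (dW)^(-m).
# C and m are placeholder values for demonstration only; the thesis fits its
# own constants from experiments and finite-element results.

def cycles_to_failure(delta_w_per_cycle, C=1000.0, m=1.5):
    """Predicted thermal cycles to failure from the inelastic strain energy
    density accumulated per cycle (delta_w_per_cycle)."""
    return C * delta_w_per_cycle ** (-m)

def acceleration_factor(dw_field, dw_test, C=1000.0, m=1.5):
    """Ratio of predicted life under field-use damage per cycle to life under
    accelerated-test damage per cycle."""
    return cycles_to_failure(dw_field, C, m) / cycles_to_failure(dw_test, C, m)

print(cycles_to_failure(0.05))
print(acceleration_factor(dw_field=0.01, dw_test=0.05))
```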
663

Development Of A Two-dimensional Navier-Stokes Solver For Laminar Flows Using Cartesian Grids

Sahin, Serkan Mehmet 01 March 2011 (has links) (PDF)
A fully automated Cartesian/quad grid generator and laminar flow solver have been developed for external flows using C++. After the input geometry is defined by nodal points, adaptively refined Cartesian grids are generated automatically. A quadtree data structure is used to connect the Cartesian cells to each other. In order to simulate viscous flows, body-fitted quad cells can optionally be generated. Connectivity is provided by cut and split cells, such that the intersection points of the Cartesian cells are used as the corners of the quads in the outermost row. Geometry-based adaptation methods for cut cells, split cells and highly curved regions are applied to the uniform mesh generated around the geometry. After sufficient resolution is obtained in the domain, the solution is computed with a cell-centered approach using a multistage time-stepping scheme. Solution-based grid adaptations are carried out during execution of the program in order to refine regions with high gradients and obtain sufficient resolution there. Moreover, a multigrid technique is implemented to accelerate convergence significantly. Several test cases are used to verify and validate the accuracy and efficiency of the code for inviscid and laminar flows.
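As a rough illustration of the quadtree refinement idea described above, the sketch below recursively splits Cartesian cells that are crossed by a circular body contour; the geometry test and stopping criterion are simplified stand-ins, not the cut/split-cell logic of the solver itself.

```python
# Simplified quadtree refinement around a circular body of radius R centred at
# the origin. A cell is split while the body boundary crosses it and the cell
# is still coarser than a minimum size. This is a toy geometry-based criterion,
# not the thesis's actual cut/split-cell treatment.
from dataclasses import dataclass, field
from typing import List

R = 0.5          # body radius
MIN_SIZE = 0.1   # stop refining below this cell edge length

@dataclass
class Cell:
    x: float      # centre x
    y: float      # centre y
    size: float   # edge length
    children: List["Cell"] = field(default_factory=list)

    def is_cut(self) -> bool:
        """True if the circle boundary (radius R about the origin) crosses this cell."""
        h = self.size / 2
        nx = max(abs(self.x) - h, 0.0)          # nearest point of the cell to the origin
        ny = max(abs(self.y) - h, 0.0)
        nearest = (nx * nx + ny * ny) ** 0.5
        farthest = ((abs(self.x) + h) ** 2 + (abs(self.y) + h) ** 2) ** 0.5
        return nearest <= R <= farthest

    def refine(self):
        if self.size <= MIN_SIZE or not self.is_cut():
            return
        h = self.size / 4
        for dx in (-h, h):
            for dy in (-h, h):
                child = Cell(self.x + dx, self.y + dy, self.size / 2)
                child.refine()
                self.children.append(child)

def count_leaves(cell):
    return 1 if not cell.children else sum(count_leaves(c) for c in cell.children)

root = Cell(0.0, 0.0, 4.0)
root.refine()
print("leaf cells:", count_leaves(root))
```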
664

Nomadic migration : a service environment for autonomic computing on the Grid

Lanfermann, Gerd January 2002 (has links)
In recent years, there has been a dramatic increase in available compute capacities. However, these "Grid resources" are rarely accessible in a continuous stream; rather, they appear scattered across various machine types, platforms and operating systems, coupled by networks of fluctuating bandwidth. It becomes increasingly difficult for scientists to exploit the available resources for their applications. We believe that intelligent, self-governing applications should be able to select resources in a dynamic and heterogeneous environment: migrating applications determine a new resource when old capacities are used up; spawning simulations launch algorithms on external machines to speed up the main execution; applications are restarted as soon as a failure is detected. All these actions can be taken without human interaction.

A distributed compute environment possesses an intrinsic unreliability. Any application that interacts with such an environment must be able to cope with its failing components: deteriorating networks, crashing machines, failing software. We construct a reliable service infrastructure by endowing a service environment with a peer-to-peer topology. This "Grid Peer Services" infrastructure accommodates high-level services such as migration and spawning, as well as fundamental services for application launching, file transfer and resource selection. It uses existing Grid technology wherever possible to accomplish its tasks. An Application Information Server acts as a generic information registry for all participants in the service environment.

The service environment we developed allows applications, for example, to send a relocation request to a migration server. The server selects a new computer based on the transmitted resource requirements, transfers the application's checkpoint and binary to the new host and resumes the simulation. Although the Grid's underlying resource substrate is not continuous, we achieve persistent computations on Grids by relocating the application. We show with real-world examples how a traditional genome analysis program can easily be modified to perform self-determined migrations in this service environment.
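To make the relocation workflow above more concrete, here is a schematic sketch of a migration request being served; every class name, hostname and checkpoint path is hypothetical, and the real Grid Peer Services infrastructure delegates these steps to existing Grid middleware.

```python
# Schematic sketch of the migrate-on-request idea: an application asks a
# migration service for a new host, its checkpoint is transferred, and the job
# is resumed there. All classes, hostnames and paths here are hypothetical.
from dataclasses import dataclass

@dataclass
class Resource:
    host: str
    free_cpus: int
    free_mem_gb: int

class MigrationService:
    def __init__(self, registry):
        self.registry = registry  # list of known Resource objects

    def select_resource(self, need_cpus, need_mem_gb):
        candidates = [r for r in self.registry
                      if r.free_cpus >= need_cpus and r.free_mem_gb >= need_mem_gb]
        # pick the least-loaded match; None if nothing fits
        return max(candidates, key=lambda r: r.free_cpus, default=None)

    def migrate(self, checkpoint_path, need_cpus, need_mem_gb):
        target = self.select_resource(need_cpus, need_mem_gb)
        if target is None:
            return "no suitable resource found; keep running in place"
        # in a real system: transfer checkpoint + binary, then restart remotely
        return f"transfer {checkpoint_path} to {target.host} and resume there"

service = MigrationService([Resource("clusterA", 4, 8), Resource("clusterB", 32, 64)])
print(service.migrate("/tmp/app.ckpt", need_cpus=16, need_mem_gb=32))
```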
665

Power Grid Analysis In VLSI Designs

Shah, Kalpesh 03 1900 (has links)
Power has become an important design-closure parameter in today's deep-submicron digital designs. The impact of increasing power spans many disciplines, from power-supply design, power converters and voltage regulators, through system, board and package thermal analysis, to power grid design, signal integrity analysis and power minimization itself. This work focuses on the challenges that increasing power creates for power grid design and analysis. The challenges arising from smaller geometries and higher power are well-researched topics, yet there is still considerable scope for further work. Traditionally, designs go through average IR-drop analysis, which depends heavily on the estimation of current dissipation. This work proposes a vectorless probabilistic toggle estimation that extends one of the approaches proposed in the literature. We further use the toggles computed with this approach to estimate the power of ISCAS89 benchmark circuits, which provides insight into the quality of the generated toggles. The power-estimation work is then compared against various state-of-the-art methodologies, including SPICE-based power estimation, logic-simulation-based power estimation and commercially available tools, and we arrive at a recommended flow that can be chosen according to design needs and schedule. Today's design complexity (high frequencies, high logic densities, and multiple levels of clock and power gating) has forced the design community to look beyond average IR drop. High switching activity induces power-supply fluctuations at the cells of a design, known as instantaneous IR drop. However, there is no well-established methodology for analyzing this phenomenon; ad hoc decoupling planning and on-chip intrinsic decoupling capacitance help contain this noise, but offer no guarantee. This work also applies the average toggle-computation approach to instantaneous IR-drop analysis (also known as dynamic IR drop or power-supply noise). We propose a cell-characterization methodology for standard cells; the resulting data is used to build a power grid model of the design, and the power network is then solved to compute the instantaneous IR drop. Leakage-power minimization has forced design teams to adopt complex power gating, with multi-level MTCMOS usage in the power grid. This adds analysis challenges for the power grid in terms of ON/OFF sequencing and the noise injected by it. This work explains the state of the art and highlights some of the issues and trade-offs of using MTCMOS logic. It further suggests a simple approach to quickly assess the impact of MTCMOS gates on the power grid in terms of peak currents and IR drop; the same approach also helps in MTCMOS gate optimization, and the overhead of early leakage optimization can be computed with it.
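Since the abstract only states that the power network is "solved", the sketch below shows the underlying nodal-analysis step: the grid is modeled as a resistive network and the IR drop follows from G·V = I. The tiny mesh, resistances and cell currents are made-up illustrative values.

```python
# Minimal static IR-drop sketch: model the power grid as a resistive network,
# assemble the nodal conductance matrix G, and solve G @ V = I for the node
# voltage drops relative to the supply pad. Grid size, resistances and cell
# currents are arbitrary illustrative values.
import numpy as np

# 4 grid nodes in a 2x2 mesh; edges given as (node_a, node_b, resistance_ohm)
edges = [(0, 1, 0.5), (2, 3, 0.5), (0, 2, 0.5), (1, 3, 0.5)]
n = 4
pad_node = 0          # node tied to the supply pad through a small resistance
pad_resistance = 0.1

G = np.zeros((n, n))
for a, b, r in edges:
    g = 1.0 / r
    G[a, a] += g
    G[b, b] += g
    G[a, b] -= g
    G[b, a] -= g
G[pad_node, pad_node] += 1.0 / pad_resistance  # tie the pad node to the ideal supply

# Current drawn by the cells attached to each node (amps); the pad draws nothing.
I = np.array([0.0, 0.02, 0.03, 0.05])

V_drop = np.linalg.solve(G, I)   # voltage drop at each node w.r.t. the supply
print("IR drop per node (V):", np.round(V_drop, 4))
```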
666

Performance Modeling Based Scheduling And Rescheduling Of Parallel Applications On Computational Grids

Sanjay, H A 10 1900 (has links)
As computational grids have become popular and ubiquitous, users have access to a large number of different types of geographically distributed grid resources. Many computational grid frameworks are composed of multiple distributed sites, with each site consisting of one or more dedicated or non-dedicated clusters. Jobs submitted to a grid are handled by a metascheduler, which interacts with the local schedulers of the clusters to schedule jobs onto the individual clusters. Computational grids have proven to be powerful research beds for the execution of various kinds of parallel applications. When a parallel application is submitted to a grid, the metascheduler has to choose a set of resources from a cluster for application execution. To select the best set of resources, it is important to determine the performance of the application; accurate performance estimates are essential for a grid metascheduler to schedule user jobs efficiently. Thus, models that predict the execution times of parallel applications on a set of resources, and a search procedure (scheduling strategy) that selects the best set of machines within a cluster, are important for enabling parallel applications on grids. For efficient execution of large scientific parallel applications consisting of multiple phases, performance models of the individual phases should be obtained. Efficient rescheduling strategies that can use the per-phase models to adapt parallel applications to application and resource dynamics are necessary for maintaining high application performance on grids. A practical and robust grid computing infrastructure that integrates components for application and resource monitoring, performance modeling, scheduling and rescheduling is essential for large-scale deployment and high performance of scientific applications on grid systems, and hence for fostering high-performance computing. This thesis focuses on developing performance models for predicting the execution times of parallel problems and subproblems on dedicated and non-dedicated grid resources. The thesis also constructs robust scheduling and rescheduling strategies in a grid metascheduler that can use the performance models for efficient execution of large scientific parallel applications on dynamic grids. Finally, the thesis builds a practical and robust grid middleware infrastructure that integrates components for performance modeling, scheduling and rescheduling, monitoring and migration, enabling large-scale deployment and use of high-performance applications on grids. The thesis consists of four main components. In the first part, we develop a comprehensive set of performance modeling strategies to predict the execution times of tightly coupled parallel applications on a set of resources in a dedicated or non-dedicated cluster. The main purpose of our prediction strategies is to aid grid metaschedulers in making scheduling decisions. Our performance modeling strategies, based on linear regression, can deal with non-dedicated systems where loads change during application execution. The models do not require detailed knowledge or instrumentation of the applications and can be constructed without the involvement of application developers. The strategies are intended for rapid and large-scale deployment of parallel applications on non-dedicated grid systems.
We evaluated our strategies on 8-, 16-, 24- and 32-node clusters with random loads and with load traces from a grid system. Our performance modeling strategies gave average prediction errors below 30% in all cases, which is reasonable for non-dedicated systems. We also found that scheduling based on our predictions results in perfect scheduling in many cases. For modeling large-scale scientific applications, we use execution profiles, automatic program analysis and manual analysis of significant portions of the application's code to identify the different phases of an application. We then apply our performance modeling strategies to predict the execution times of the different phases of tightly coupled parallel applications on a set of resources in a dedicated or non-dedicated cluster. Our experiments show that using combinations of per-phase performance models gives 18-70% more accurate predictions than using a single performance model for the whole application. In the second part of the thesis, we devise, evaluate and compare algorithms for scheduling tightly coupled parallel applications on multi-cluster grids. Our algorithms use performance models that predict the execution times of parallel applications to evaluate candidate schedules. We propose a novel algorithm called Box Elimination (BE) that searches a space of performance-model parameters to determine efficient schedules. By eliminating large regions of the search space containing poorer solutions at each step and concentrating on high-quality solutions, the algorithm generates efficient schedules within a few seconds, even for clusters of 512 processors. Through a large number of real and simulated experiments, we compared our algorithm with popular optimization techniques and show that it generates up to 80% more efficient schedules than the other algorithms, and that the resulting execution times are more robust against performance-modeling errors. The third part of the thesis deals with policies for rescheduling long-running multi-phase parallel applications in response to application and resource dynamics. Here we use our performance modeling and scheduling strategies to derive rescheduling plans for executing multi-phase parallel applications on grids. A rescheduling plan consists of potential points in the application execution at which to reschedule, together with the schedules of resources for execution between two consecutive rescheduling points. We developed three algorithms for deriving a rescheduling plan: an incremental algorithm, a divide-and-conquer algorithm and a genetic algorithm. We also developed an algorithm that combines rescheduling plans derived on different clusters into a single coherent plan for application execution on a grid consisting of multiple clusters. The rescheduling plans generated by our algorithms are highly efficient, leading to application execution times within 10% of those obtained by a brute-force method. We also find that rescheduling in response to changing application and resource dynamics, using the multi-cluster rescheduling plans generated by our algorithms, gives much lower execution times than executing the applications on a single schedule throughout.
In the final part of the thesis, we developed a practical grid middleware framework called MerITA (Middleware for Performance Improvement of Tightly Coupled Parallel Applications on Grids), a system for effective execution of tightly coupled parallel applications on multi-cluster grids consisting of dedicated or non-dedicated, interactive or batch systems. The framework brings together performance modeling for automatically determining the characteristics of parallel applications, scheduling strategies that use the performance models to map applications to resources efficiently, rescheduling policies for determining the points in an application's execution at which it can be rescheduled to a different set of resources for better performance, and a checkpointing library that enables rescheduling.
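The performance models above are regression-based; the sketch below shows the general idea of fitting execution time against simple resource features with least squares and letting a metascheduler rank candidate clusters by predicted runtime. The feature set and synthetic observations are illustrative assumptions, not the thesis's actual model form.

```python
# Illustrative regression-based execution-time model: fit observed runtimes
# against simple features (1/processors, average CPU load) and use the fit to
# rank candidate resource sets. Features and numbers are made up for the example.
import numpy as np

# (processors, avg_load, observed_runtime_seconds) -- synthetic observations
runs = [(4, 0.1, 410.0), (8, 0.2, 230.0), (16, 0.1, 118.0), (32, 0.4, 95.0)]

X = np.array([[1.0, 1.0 / p, load] for p, load, _ in runs])  # intercept, 1/p, load
y = np.array([t for _, _, t in runs])

coef, *_ = np.linalg.lstsq(X, y, rcond=None)

def predict_runtime(processors, avg_load):
    return float(coef @ np.array([1.0, 1.0 / processors, avg_load]))

# A metascheduler could rank candidate clusters by predicted runtime:
candidates = {"clusterA-16": (16, 0.3), "clusterB-24": (24, 0.6)}
best = min(candidates, key=lambda k: predict_runtime(*candidates[k]))
print(best, {k: round(predict_runtime(*v), 1) for k, v in candidates.items()})
```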
667

Ανακάλυψη web services σε συστήματα ομοτίμων (Peer-to-Peer) / Discovery of web services in peer-to-peer systems

Καλαποδάς, Γεώργιος 18 September 2007 (has links)
Both Web Services (WS) and P2P systems, two thriving forms of computing and the Internet, have gained wide acceptance today. Napster, Gnutella and LimeWire are only a few of the implementations of P2P technology, while WS flourish in the area of business and Internet-oriented enterprise design; eBay, Amazon and Microsoft are only a few of the companies making extensive use of WS technologies. To date, these two technologies have developed almost independently of each other. The motivation for this work was the centralized UDDI registry used for discovering and invoking WS. The way the UDDI registry is implemented and operated entails a significant drawback: the centralization of the registry itself and its complete lack of distribution. By placing all information at a single point, bottleneck problems, and thus a sharp drop in system performance, can appear very quickly. The main goal, therefore, is how the UDDI registry could be distributed so as to improve the discovery and invocation times of the WS it contains; the functionality and structure of P2P systems, as distributed structures, are central to this effort. In this master's thesis an attempt was made to combine P2P and WS technologies: drawing on the advantages of each, a P2P system was built that can distribute WS and support more efficient and faster discovery than the UDDI registry. In addition, certain quality-of-service parameters for Web Services (Quality of Web Services, QoWS) were incorporated, enabling more efficient WS search and more direct results. The technologies that served as the development foundation are P-Grid, a P2P communication protocol, and Gridella, its GUI implementation for file discovery in a P-Grid P2P network. Finally, conclusions are drawn from experiments and measurements on the operation of the prototype application, named WS-Grid.
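As a rough illustration of the QoWS idea described above, the sketch below filters and ranks discovered service descriptions by a pair of quality parameters; the service entries, attribute names and weights are invented for the example and are not taken from the WS-Grid prototype.

```python
# Toy QoWS-aware ranking of discovered Web Service descriptions. The services,
# quality attributes (latency, availability) and scoring weights are invented
# for illustration; WS-Grid defines its own QoWS parameters on top of
# P-Grid/Gridella.
services = [
    {"name": "WeatherWS",  "latency_ms": 120, "availability": 0.99},
    {"name": "WeatherWS2", "latency_ms": 300, "availability": 0.999},
    {"name": "QuotesWS",   "latency_ms": 80,  "availability": 0.95},
]

def qows_score(svc, w_latency=0.5, w_avail=0.5):
    """Higher is better: reward availability, penalise latency (normalised to 1 s)."""
    return w_avail * svc["availability"] - w_latency * (svc["latency_ms"] / 1000.0)

def discover(keyword, min_availability=0.98):
    hits = [s for s in services
            if keyword.lower() in s["name"].lower()
            and s["availability"] >= min_availability]
    return sorted(hits, key=qows_score, reverse=True)

print([s["name"] for s in discover("weather")])
```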
668

Lietuvos gyventojų socialinių pokyčių raiška kartografinėmis anamorfozėmis / Expression of social changes of residents of Lithuania by cartographic anamorphosis

Kauneckaitė, Lina 08 September 2009 (has links)
Besides the traditional methods of cartographic representation, which are built on Euclidean geometry, there are cartographic models whose construction is based on non-Euclidean metrics: cartoids, "mental" maps and cartographic anamorphoses. Cartographic anamorphoses are graphic representations derived from traditional maps, in which the deformation of the map image depends on the values of the phenomenon under study, on the base map, and on the chosen algorithm. Anamorphoses can be classified into regular-grid and irregular-grid types. The aim of this master's thesis is to review the theory of constructing cartographic anamorphoses and, based on it, to build an irregular-grid anamorphosis of international emigration of the population of Lithuania. Cartographic anamorphoses are best suited to representing socio-economic phenomena, because from the viewpoint of cartographic expression they convey statistical information better and improve communicative quality. Their biggest drawback is the lack of simple algorithms for constructing them. Nevertheless, this method of cartographic visualization is one of the most promising areas of cartographic development, which is why it is important to begin studying it in Lithuania.
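The core arithmetic behind an area anamorphosis is simple: each region's target area is made proportional to its statistical value while the total mapped area is preserved. The sketch below computes target areas and linear scale factors for invented regions and emigrant counts; an actual anamorphosis algorithm would then iteratively deform the geometry toward these targets.

```python
# Target-area computation for a value-by-area anamorphosis (cartogram):
# each region's area should end up proportional to its statistic while the
# total mapped area is preserved. Region names, areas (km^2) and emigrant
# counts are invented for illustration.
regions = {
    "RegionA": {"area_km2": 9700.0, "emigrants": 12000},
    "RegionB": {"area_km2": 6700.0, "emigrants": 3000},
    "RegionC": {"area_km2": 4400.0, "emigrants": 9000},
}

total_area = sum(r["area_km2"] for r in regions.values())
total_value = sum(r["emigrants"] for r in regions.values())

for name, r in regions.items():
    target_area = total_area * r["emigrants"] / total_value
    # linear scale factor a deformation algorithm would drive each region toward
    scale = (target_area / r["area_km2"]) ** 0.5
    print(f"{name}: target {target_area:.0f} km^2, linear scale {scale:.2f}")
```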
669

An Economic Analysis of Grid-tie Residential Photovoltaic System and Oil Barrel Price Forecasting: A Case Study of Saudi Arabia

Mutwali, Bandar 08 January 2013 (has links)
The demand for electricity is increasing daily due to technological advancement and luxurious lifestyles. Increasing utilization of electricity means the depletion of fossil fuel reserves. Thus, governments around the world are seeking alternative and sustainable sources of energy, such as solar power. The main purpose of this research is to develop a knowledge base on residential electricity generation from the grid and from solar energy. This thesis examines the economic feasibility of using a grid-tied residential photovoltaic (GRPV) system in Saudi Arabia with the HOMER software. Models forecasting the price of oil barrels through artificial neural networks (ANN) were also employed in the analysis. The study shows that an oil-rich country like Saudi Arabia has the potential to utilize the GRPV system as an alternative source of energy. It also discusses the potential for applying solar power and assesses the performance of existing systems based on collected output data.
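To make the feasibility question concrete, a back-of-the-envelope payback calculation for a grid-tied PV system is sketched below; the capacity, installed cost, yield and tariff figures are placeholders, not inputs or results from the thesis's HOMER analysis.

```python
# Back-of-the-envelope simple-payback sketch for a grid-tied residential PV
# system. All inputs (system size, installed cost, specific yield, tariff) are
# placeholder values, not figures from the thesis's HOMER model.
system_kw = 5.0               # installed PV capacity
cost_per_kw = 2000.0          # USD per kW installed
specific_yield = 1800.0       # kWh per kW per year (sunny-climate assumption)
tariff = 0.08                 # USD per kWh offset or exported

capex = system_kw * cost_per_kw
annual_energy = system_kw * specific_yield
annual_savings = annual_energy * tariff

print(f"capex: ${capex:,.0f}")
print(f"annual energy: {annual_energy:,.0f} kWh, savings: ${annual_savings:,.0f}")
print(f"simple payback: {capex / annual_savings:.1f} years")
```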
670

Vielfalt entfalten - Musikhören und Musikdenken in Netzen : die Psychologie der persönlichen Konstrukte und das Repertory Grid von George A. Kelly: Theorie und Anwendung in Musikwissenschaft und Musikpsychologie / Unfolding diversity - music listening and musical thinking in networks: George A. Kelly's psychology of personal constructs and the Repertory Grid, theory and application in musicology and music psychology

Ohme, Ute January 2008 (has links)
Also published as doctoral dissertation: Berlin, Humboldt-Univ., 2007
