151

Improved understanding of aerosol processes using satellite observations of aerosol optical properties

Bulgin, Claire Elizabeth January 2010 (has links)
Atmospheric aerosols are the largest remaining uncertainty in the Earth's radiative budget, and it is important that we improve our knowledge of aerosol processes if we are to understand current radiative forcing and accurately project changes in future climate. Aerosols affect the radiation balance directly, through the absorption and scattering of incoming solar radiation, and indirectly, through the modification of cloud microphysical properties. Understanding aerosol forcing remains challenging because the short atmospheric residence time of aerosols produces large spatial and temporal heterogeneity in aerosol loading and chemical composition. Satellite retrievals are becoming increasingly important to improving our knowledge of aerosol forcing. They provide regular global data at finer spatial and temporal resolution than is available from sparse ground-based point measurements or localised aircraft campaigns, but they cannot unambiguously determine aerosol speciation and rely heavily on a priori assumptions. In this thesis I use data from two satellite instruments, the Along Track Scanning Radiometer 2 (ATSR-2) and the Spinning Enhanced Visible and InfraRed Imager (SEVIRI), interpreted using the Oxford-RAL Aerosol and Cloud (ORAC) retrieval scheme, in three pieces of interrelated work. First, I use satellite observations of aerosol optical depth τa and cloud particle effective radius re from the ATSR-2 instrument in 1997 to investigate the Twomey indirect effect (IE = −δ ln re/δ ln τa) in regions of continental outflow. I generally find a negative correlation between τa and re, with the strongest inverse relationships downwind of Africa. North American and eastern Asian continental outflow exhibits a strong seasonal dependence, as expected. Global values for the IE range from 0.10 to 0.16, consistent with theoretical predictions. Downwind of Africa, I find that the IE is unphysically high but robust (r = −0.85) during JJA, associated with high aerosol loading, and attribute this tentatively to the Twomey hypothesis accounting for only a limited number of the physical properties of aerosols. Second, I test the response of the ORAC retrieval algorithm for MSG SEVIRI to changes in the aerosol properties used in the dust aerosol model, using data from the Dust Outflow and Deposition to the Ocean (DODO) flight campaign in August 2006. I find that using the observed DODO free-tropospheric aerosol size distribution and refractive index, rather than the dust aerosol properties from the Optical Properties of Aerosol and Cloud (OPAC) package, increases simulated top-of-atmosphere radiance at 0.55 μm by 10–15% at a fixed aerosol optical depth of 0.5, with the largest differences at low solar zenith angles. This difference is sensitive to changes in AOD, increasing by ~2–4% between AODs of 0.4 and 0.6. I test the sensitivity of the retrieval to the vertical distribution of the aerosol and find that it is unimportant in determining simulated radiance at 0.55 μm. I also test the ability of the ORAC retrieval, as used to produce the GlobAerosol dataset, to identify continental aerosol outflow from the African continent, and I find that it poorly constrains aerosol speciation. I therefore develop spatially and temporally resolved prior distributions of aerosols to inform the retrieval, incorporating five aerosol models: desert dust, maritime, biomass burning, urban and continental.
I use a Saharan Dust Index and the GEOS-Chem chemistry transport model to describe dust and biomass burning aerosol outflow, and compare AOD using my speciation against the GlobAerosol retrieval during January and July 2006. I find AOD discrepancies of 0.2–1 over regions of biomass burning outflow, where AOD from my aerosol speciation and the GlobAerosol speciation can differ by as much as 50–70%. Finally, I use satellite observations of aerosol optical depth and cloud fraction from the MSG SEVIRI instrument to investigate the semi-direct effect of Saharan dust aerosol on marine stratocumulus cloud cover over the Atlantic during July 2006. I first use these data to study the spatial autocorrelation of aerosol optical depth and find that it is correlated over a lag of 0.1° (approximately 10 km at low latitudes), beyond which it rapidly decorrelates. I find a 15% higher cloud fraction in regions with high dust loading (AOD > 0.5) than in scenes with lower dust loading (AOD < 0.5), and for high-dust scenes cloud fraction increases with local static stability. I attribute this tentatively to aerosol solar shielding enhancing longwave cloud-top radiative cooling, which drives marine stratocumulus convection.
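For readers new to the metric, the IE defined above is simply the negative slope of re against τa in log-log space. A minimal Python sketch on synthetic data (illustrative only; this is not the thesis's retrieval code):

```python
import numpy as np

def indirect_effect(tau_a, r_e):
    """Estimate the Twomey indirect effect IE = -d ln(r_e) / d ln(tau_a)
    as the slope of a least-squares fit in log-log space."""
    slope, _ = np.polyfit(np.log(tau_a), np.log(r_e), 1)
    return -slope

# Synthetic example: r_e decreasing with tau_a, built with IE ~ 0.12.
rng = np.random.default_rng(0)
tau_a = rng.uniform(0.05, 0.8, 500)                       # aerosol optical depth
r_e = 12.0 * tau_a**-0.12 * rng.lognormal(0, 0.05, 500)   # effective radius (um)

print(f"IE = {indirect_effect(tau_a, r_e):.2f}")          # recovers ~0.12
```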
152

Evaluation of cloudiness and snowfall simulated by a semi-spectral and a bulk-parameterization scheme of cloud microphysics for the passage of a Baltic heat cyclone

Raabe, Armin, Mölders, Nicole 23 November 2016 (has links) (PDF)
The differences between the concepts of two parameterizations of cloud microphysics are analyzed. Simulations alternately applying these parameterizations are performed for a Baltic heat cyclone event. The results of the simulations are compared with each other as well as with observed distributions of cloudiness and snowfall. The main differences between the simulated distributions result from the assumptions on cloud ice, the ice classes, and the size distributions of the cloud and precipitating particles. Both schemes succeed in predicting the position and the main structure of the principal cloud and snowfall fields. Nevertheless, the more convective-type parameterization overestimates snowfall, while the other underestimates it.
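To make the bulk/spectral distinction concrete: bulk schemes typically prescribe an analytic particle size distribution such as the exponential Marshall-Palmer form, while (semi-)spectral schemes resolve the spectrum in discrete bins. A sketch of that contrast, assuming the classic Marshall-Palmer parameters (the thesis's actual schemes are not reproduced here):

```python
import numpy as np

# Bulk schemes prescribe an analytic size distribution; the classic example
# is the exponential Marshall-Palmer form N(D) = N0 * exp(-lambda * D).
N0 = 8.0e6        # intercept parameter (m^-4), classic Marshall-Palmer value
lam = 2.0e3       # slope parameter (m^-1), diagnosed from the water content

D = np.linspace(1e-4, 5e-3, 50)        # particle diameters (m)
bulk_spectrum = N0 * np.exp(-lam * D)  # fixed functional form (m^-4)

# A (semi-)spectral scheme instead carries the spectrum in explicit size
# bins and evolves each bin independently, so the shape is not prescribed.
edges = np.linspace(1e-4, 5e-3, 11)    # 10 size bins
bin_counts = [N0 / lam * (np.exp(-lam * a) - np.exp(-lam * b))
              for a, b in zip(edges[:-1], edges[1:])]   # particles per m^3
print(f"particles in resolved bins: {sum(bin_counts):.0f} per m^3")
```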
153

Improved Virtual Machine (VM) based Resource Provisioning in Cloud Computing

Md. Mahfuzur, Rahman 13 October 2016 (has links)
To achieve "provisioning elasticity", the cloud needs to manage its available resources on demand. A priori, static VM provisioning introduces no runtime overhead but fails to handle unanticipated changes in resource demands. Dynamic provisioning addresses this problem but introduces runtime overhead. To avoid sub-optimal provisioning, my PhD thesis adopts a hybrid approach that combines static and dynamic provisioning: an initial static placement of VMs is adapted in response to evolving load characteristics. My work focuses on broadening the applicability of clouds by examining how the infrastructure can be used more effectively to support historically atypical applications (e.g. those that are interactive in nature, with tighter QoS constraints). To accomplish this I have developed a family of related algorithms that collectively improve resource sharing on physical machines, so that load variation is better accommodated and the probability of VM interference due to resource contention is reduced. The family includes three core dynamic provisioning algorithms. The first provides for the short-term, controlled sharing of resources between co-hosted VMs. The second identifies pairs (and, by extrapolation, larger groups) of VMs that are predicted to be "compatible" in terms of the resources they need, allowing the cloud provider to co-locate VMs so as to make the first algorithm more effective. The third deals with under-utilized physical machines by re-packing the VMs on those machines while also considering their compatibility; it addresses both the possibility of the second algorithm creating under-utilized machines as a result of pairing and migration, and the under-utilization arising from "holes" left by the termination of short-duration VMs (another form of atypical VM application). I have also created a surprisingly simple static provisioning algorithm that considers compatibility to minimize VM interference, which can be used before my dynamic algorithms. My evaluation is primarily simulation-based, though I have also implemented the core algorithms on a small test-bed system to verify correctness. The results obtained from my simulation experiments suggest that hybrid static and dynamic provisioning approaches are feasible and should be effective in supporting a broad range of applications in cloud environments. / February 2017
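The abstract does not spell out the placement algorithms themselves; the following minimal sketch illustrates the general idea of compatibility-aware placement under an assumed definition of compatibility (peak demands of co-hosted VMs rarely coinciding). All names and thresholds here are hypothetical:

```python
# A minimal sketch (my own illustration, not the thesis algorithms) of
# compatibility-aware placement: treat two VMs as "compatible" when their
# peak demands rarely coincide, approximated here by the peak of their
# combined load trace sitting well below the sum of their individual peaks.

def compatible(trace_a, trace_b, headroom=0.8):
    """Heuristic: combined peak <= headroom * (peak_a + peak_b)."""
    combined_peak = max(a + b for a, b in zip(trace_a, trace_b))
    return combined_peak <= headroom * (max(trace_a) + max(trace_b))

def place(vms, capacity):
    """First-fit-decreasing by peak load, preferring hosts where the new
    VM is compatible with every VM already placed there."""
    hosts = []  # each host is a list of (name, trace) pairs
    for name, trace in sorted(vms.items(), key=lambda kv: -max(kv[1])):
        target = next((h for h in hosts
                       if sum(max(t) for _, t in h) + max(trace) <= capacity
                       and all(compatible(trace, t) for _, t in h)), None)
        if target is None:
            target = []
            hosts.append(target)
        target.append((name, trace))
    return hosts

# Toy usage: the two anti-correlated VMs end up co-located.
vms = {"web": [9, 2, 9, 2], "batch": [2, 9, 2, 9], "db": [8, 8, 8, 8]}
for i, host in enumerate(place(vms, capacity=20)):
    print(f"host {i}: {[name for name, _ in host]}")
```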
154

Privacy Protection on Cloud Computing

Li, Min 01 January 2015 (has links)
The cloud is becoming the most popular computing infrastructure, attracting more and more traditional companies with its flexibility and cost-effectiveness. However, privacy concerns are the major issue preventing users from deploying on public clouds. My research focuses on protecting users' privacy in cloud computing; I present a hardware-based and a migration-based approach. The root cause of the privacy problem is that the current cloud privilege design gives too much power to cloud providers: once the control virtual machine (installed by the cloud provider) is compromised, external adversaries can breach users' privacy, and malicious cloud administrators can disclose users' private data by abusing the provider's privileges. I therefore develop two cloud architectures, MyCloud and MyCloud SEP, that protect users' privacy using hardware virtualization technology. I eliminate the privilege of cloud providers by moving the control virtual machine (control VM) to the processor's non-root mode and keeping only the privacy-protection and performance-critical components in the Trusted Computing Base (TCB). In addition, the new cloud platform provides rich resource management and allocation functionality without greatly increasing the TCB size. Besides attacks on the control VM, external adversaries may compromise a guest VM, or directly install a malicious one, and then target other legitimate guest VMs through their connections. Co-locating with vulnerable virtual machines, or "bad neighbors", on the same physical server therefore introduces additional security risks. I develop a migration-based approach that quantifies the security risk of each VM and generates a virtual machine placement that minimizes the overall security risk, taking the connections among virtual machines into account. According to my experiments, this approach can improve the survivability of most VMs.
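As an illustration of the migration-based idea (the abstract does not give the risk model; the scoring below is hypothetical), a greedy placement that keeps vulnerable, interconnected VMs apart might look like:

```python
# A toy sketch (hypothetical risk model; the abstract does not give one):
# each VM has a vulnerability score, and co-locating VMs adds risk that is
# amplified when the VMs are connected. Greedily place each VM on the host
# where the added co-location risk is smallest.

def added_risk(vm, host_vms, vuln, connected):
    """Extra risk from co-locating `vm` with the VMs already on a host."""
    return sum(vuln[vm] * vuln[other] * (2 if (vm, other) in connected else 1)
               for other in host_vms)

def place(vms, vuln, connected, n_hosts, cap):
    hosts = [[] for _ in range(n_hosts)]
    # Place the most vulnerable VMs first so they claim the emptiest hosts.
    for vm in sorted(vms, key=lambda v: -vuln[v]):
        best = min((h for h in hosts if len(h) < cap),
                   key=lambda h: added_risk(vm, h, vuln, connected))
        best.append(vm)
    return hosts

vuln = {"a": 0.9, "b": 0.8, "c": 0.2, "d": 0.1}
connected = {("a", "b"), ("b", "a")}  # a and b communicate with each other
print(place(vuln, vuln, connected, n_hosts=2, cap=2))
# -> [['a', 'd'], ['b', 'c']]: the two risky, connected VMs are kept apart
```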
155

Gamma rays, cosmic rays and local molecular clouds

Richardson, K. M. January 1988 (has links)
No description available.
156

Pointwise and Instance Segmentation for 3D Point Cloud

Gujar, Sanket 11 April 2019 (has links)
The camera is the cheapest and most computationally real-time option for detecting or segmenting the environment for an autonomous vehicle, but it does not provide depth information and is unreliable at night, in bad weather, and during tunnel flash-outs. In such situations the risk of an accident is higher for an autonomous car driven by camera alone. The industry has relied on LiDAR for the past decade to solve this problem and provide depth information about the environment, but LiDAR has its own shortcomings. Common industry methods project the point cloud into an image and run detection and localization networks on that projection for inference; however, LiDAR sees obscurants in bad weather and is sensitive enough to detect snow, which makes projection-based methods difficult to keep robust. We propose a novel pointwise and instance segmentation deep learning architecture for point clouds, focused on self-driving applications. The model depends only on LiDAR data, making it illumination-invariant and overcoming the camera's shortcomings in the perception stack. The pipeline takes advantage of both global and local/edge features of points in point clouds to generate high-level features. We also propose Pointer-CapsNet, an extension of CapsNet for small 3D point clouds.
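The architectural details are not given in the abstract; the standard pattern it alludes to, combining per-point features with a pooled global feature for pointwise segmentation, can be sketched in the spirit of PointNet (this is not the thesis's model or Pointer-CapsNet):

```python
import torch
import torch.nn as nn

class PointwiseSegNet(nn.Module):
    """Minimal pointwise segmentation sketch: a shared per-point MLP plus a
    max-pooled global feature, concatenated and classified per point."""
    def __init__(self, n_classes: int, in_dim: int = 3):
        super().__init__()
        self.local = nn.Sequential(                # shared per-point MLP
            nn.Linear(in_dim, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU())
        self.head = nn.Sequential(                 # classify each point from
            nn.Linear(128 + 128, 128), nn.ReLU(),  # local + global context
            nn.Linear(128, n_classes))

    def forward(self, pts):                        # pts: (B, N, 3)
        local_feat = local = self.local(pts)       # (B, N, 128) per point
        global_feat = local_feat.max(dim=1).values # (B, 128) global max-pool
        global_rep = global_feat.unsqueeze(1).expand_as(local)
        return self.head(torch.cat([local_feat, global_rep], dim=-1))

logits = PointwiseSegNet(n_classes=4)(torch.randn(2, 1024, 3))
print(logits.shape)  # torch.Size([2, 1024, 4]): one class score per point
```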
157

Um estudo sobre a adoção da Computação em Nuvem no Brasil / A study on Cloud Computing adoption in Brazil

Ramalho, Neilson Carlos Leite 18 December 2012 (has links)
Cloud Computing (CC) is one of the most discussed topics among IT professionals today. With a strong economic appeal, CC makes possible the idea of computing as a utility, in which computing resources (processing power and storage, for example) can be consumed and paid for with the same convenience as electricity. In this new paradigm, a startup company no longer needs up-front capital to invest in IT equipment: computing resources are acquired as needed, and the customer pays only for what is used. This research contributes to organizations and academia by analyzing CC adoption among Brazilian companies. The survey covers private companies of all sizes and sectors that have adopted at least one IT service in the CC model. The research model was designed around specific objectives derived from gaps in what is known about the use of CC services in Brazil. The study used a non-probability sample of 96 cases, covering characteristics of the respondent, the organization, and the CC service most important to the organization. Data were collected through a questionnaire and analyzed statistically using nonparametric techniques and cluster analysis. The research is exploratory, examining frequencies and links between organizational characteristics and the characteristics of CC services. It identified the characteristics of the CC services used in Brazil and the degree to which each service adheres to the proposed definition of CC. Additionally, the relationships between organizational characteristics and the characteristics of CC services are presented and discussed. Furthermore, it was possible to identify three distinct groups of companies with respect to the characteristics of the CC services they use, and to describe the organizational and service characteristics associated with each group. Finally, CC is discussed in the light of outsourcing theories.
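For illustration only (the abstract does not specify the clustering algorithm or features used), grouping companies by scored characteristics of their adopted service might look like:

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical feature matrix: one row per company, columns scoring the
# adopted service on CC traits, e.g. [self-service, pay-per-use, elasticity].
X = np.array([
    [1.0, 1.0, 0.9],   # near-textbook CC service
    [0.9, 0.8, 1.0],
    [0.5, 0.9, 0.2],   # partial adherence to the CC definition
    [0.4, 0.7, 0.3],
    [0.1, 0.2, 0.0],   # traditional hosting relabelled as "cloud"
    [0.2, 0.1, 0.1],
])
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(labels)  # three groups of companies, as the study found
```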
158

Modeling and exploiting QoS prediction in cloud and service computing

Zhang, Yilei January 2013 (has links)
Thesis (Ph.D.)--Chinese University of Hong Kong, 2013. Includes bibliographical references (leaves 160–174). No description available.
159

Investigating data standardisation and modelling challenges to enable advanced power systems analysis

Shand, Corinne Margaret January 2018 (has links)
As the power industry moves towards more active distribution networks, there is an increased requirement for analysis and observability of the current state of the network. Utilities face a number of challenges in realising this, including the quality and accuracy of their network models; the lack of integration between network models and the large quantities of sensor data being collected; the security and communication challenges posed by installing large numbers of sophisticated sensors across distribution networks; and the exponential increase in computing power required to fully analyse modern network configurations. This thesis examines these challenges and how cloud computing can address them by providing secure platforms on which to deploy complex data collection and network analysis applications. One of the main research contributions is the use of remote data collection from micro phasor measurement units (μPMUs), which capture synchronised information about the state of the distribution network. Impedance equations are applied to network data recorded by μPMUs and the results are compared with network models; this identifies areas of the distribution network that require resurveying or upgrading, potentially affecting planning for the installation of generation or load. Triggers can be used to reduce the bandwidth of data sent by a μPMU; these were tested with real-world data to show how a combination of local intelligence and cloud-based analysis can reduce bandwidth requirements while still supporting the use of detailed measurement data in a cloud-based fault detection system. Power flow analysis is an important tool for both operations and planning engineers, and as computing power has increased, the time required to run an individual power flow case has decreased rapidly. However, there has been a corresponding increase in the complexity of the data, as utilities seek to model and analyse distributed energy resources attached to the medium and low voltage networks. This has made network models more complex, exponentially increasing the number of contingencies that need to be analysed in an emergency situation. Another main research contribution is a demonstration of the challenges faced when using a commercial cloud platform to inexpensively solve computationally intensive power flow problems, and of the time, costs and feasibility of performing N-1 and N-2 analysis on a 21,000-bus network. It includes a full analysis and comparison of execution times and costs for different commercial cloud system configurations, as well as the extrapolated cost of running a full N-2 analysis of over 420 million contingencies in under 10 minutes. This includes a demonstration of a cloud client and server application, developed as part of this research, that leverages a commercial power flow engine. Finally, this thesis summarises how these research outputs can be combined to give utilities a commercial, open, standards-based cloud platform for continuous, automated contingency analysis using real-time sensor data and current network conditions. This would better inform control engineers about areas of vulnerability and help them identify and counter these in real time.
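A minimal sketch of the impedance idea (the thesis's actual equations are not reproduced in the abstract; the phasor values below are invented for illustration): with synchronised voltage phasors from μPMUs at both ends of a line section and the line current, the apparent series impedance is Z = (V_send − V_recv) / I, and a large mismatch against the model impedance flags the section for resurvey.

```python
import numpy as np

def apparent_impedance(v_send, v_recv, i_line):
    """Series impedance (complex ohms) from synchronised end-point phasors."""
    return (v_send - v_recv) / i_line

# Phasors as complex numbers: magnitude * exp(j * angle); values invented.
v_send = 7200 * np.exp(1j * np.deg2rad(0.00))   # sending-end voltage (V)
v_recv = 7098 * np.exp(1j * np.deg2rad(-0.74))  # receiving-end voltage (V)
i_line = 180 * np.exp(1j * np.deg2rad(-25.0))   # line current (A)

z_meas = apparent_impedance(v_send, v_recv, i_line)
z_model = 0.30 + 0.70j                          # impedance from network model
print(f"measured Z = {z_meas:.3f} ohm vs model Z = {z_model} ohm")
if abs(z_meas - z_model) / abs(z_model) > 0.2:  # >20% mismatch
    print("flag line section for resurvey")
```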
160

Budoucnost outsourcingu IT v době SaaS / The future of IT outsourcing in the age of SaaS

Šapovalov, Tomáš January 2011 (has links)
Information technologies develop very quickly, and companies' attitudes towards them should change just as fast. The question is whether this is actually happening: whether companies and their employees can adapt to new technologies as quickly as those technologies arise. One of the phenomena of recent years is the delivery of applications as Software as a Service (SaaS). This concept is closely linked with cloud computing, and its use can take the form of outsourcing the corporate IT structure. Here, then, three much-discussed concepts meet, yet the terms are often confused or used in the wrong context. This work focuses on explaining these concepts in the context of business intelligence and on analysing the aspects that affect them. Cloud computing in particular is a modern concept that everyone talks about, while few people know exactly what it means and what it could bring to a particular company. This work helps to answer these questions and to sort out the context and framework within which possible changes in corporate IT infrastructure should be discussed when making decisions. The first part explains the basic concepts and places them in their mutual context. The second part is devoted to current trends in this area, so that conclusions can be drawn for a prediction of the expected development in the coming years.
