111

Empirical Performance Analysis of High Performance Computing Benchmarks Across Variations in Cloud Computing

Mani, Sindhu 01 January 2012 (has links)
High Performance Computing (HPC) applications are data-intensive scientific software requiring significant CPU and data storage capabilities. Researchers have examined the performance of the Amazon Elastic Compute Cloud (EC2) environment across several HPC benchmarks; however, an extensive HPC benchmark study and a comparison between Amazon EC2 and Windows Azure (Microsoft's cloud computing platform), with metrics such as memory bandwidth, Input/Output (I/O) performance, and communication and computational performance, are largely absent. The purpose of this study is to perform an exhaustive HPC benchmark comparison on the EC2 and Windows Azure platforms. We implement existing benchmarks to evaluate and analyze the performance of two public clouds spanning both the IaaS and PaaS types. We use Amazon EC2 and Windows Azure as platforms for hosting HPC benchmarks, with variations such as instance type, number of nodes, hardware, and software. This is accomplished by running the STREAM, IOR, and NPB benchmarks on these platforms with varying numbers of nodes for small and medium instance types. These benchmarks measure memory bandwidth, I/O performance, and communication and computational performance. Benchmarking cloud platforms provides useful objective measures of their worthiness for HPC applications, in addition to assessing their consistency and predictability in supporting them.
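The memory-bandwidth side of such a comparison can be made concrete with a small sketch. Below is a hypothetical STREAM-style "triad" probe in Python/NumPy; the array size, trial count, and bandwidth formula are illustrative assumptions, not the thesis's configuration (the thesis runs the real STREAM, IOR, and NPB suites, which are compiled codes):

```python
# Hypothetical sketch of a STREAM-style "triad" memory-bandwidth probe.
# Real STREAM is C/Fortran; NumPy allocates a temporary for scalar * c,
# so this is a rough lower bound, not a faithful reimplementation.
import time
import numpy as np

N = 10_000_000                      # elements per float64 array (~80 MB each)
a = np.zeros(N)
b = np.random.rand(N)
c = np.random.rand(N)
scalar = 3.0

best = float("inf")
for _ in range(5):                  # keep the best of several trials, as STREAM does
    t0 = time.perf_counter()
    a[:] = b + scalar * c           # triad: two loads and one store per element
    best = min(best, time.perf_counter() - t0)

bytes_moved = 3 * N * a.itemsize    # two reads + one write, 8 bytes each
print(f"Triad bandwidth: {bytes_moved / best / 1e9:.2f} GB/s")
```

Running such a probe on differently sized cloud instances is the kind of measurement from which per-instance-type memory-bandwidth comparisons are built.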
112

Modelo tecnológico de análisis predictivo basado en machine learning para evaluación de riesgo crediticio [A technological model for predictive analytics based on machine learning for credit risk assessment]

Ortiz Huamán, Cesar Humberto, Haro Bernal, Brenda Ximena 15 July 2017 (has links)
The growth of tools and technological innovation in society has led organizations to produce and store large amounts of data. Managing this data and extracting knowledge from it is both a challenge and a key to generating competitive advantage. Two considerations are weighed within this project: the complexity of implementation, and the costs associated with the necessary technologies and tools. Uncovering the patterns hidden in collected data requires examining a large volume of data in detail, and this kind of analysis is too complex to carry out manually (Chappell & Associates, 2015). Fields of computer science such as machine learning serve as the basis for predictive analysis, allowing us to anticipate the future behavior of the variables defined by the problem we identify. This project is motivated by the need for a technological model of predictive analytics based on machine learning for credit risk assessment. We considered the current landscape of implementations and architectures developed by companies, which offer predefined solutions or general proposals that lack the flexibility and detail that a system built on machine learning technology requires. / Tesis
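To illustrate the predictive step such a model rests on, here is a minimal, hypothetical sketch of a credit-risk classifier in Python with scikit-learn. The synthetic features (income, credit-utilization ratio, past delinquencies), the label-generation rule, and the logistic-regression choice are illustrative assumptions, not the thesis's actual model or data:

```python
# Hypothetical sketch: train a credit-risk classifier on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 5_000
income = rng.normal(40_000, 15_000, n)           # annual income
utilization = rng.uniform(0.0, 1.0, n)           # credit-utilization ratio
delinquencies = rng.integers(0, 10, n).astype(float)
X = np.column_stack([income, utilization, delinquencies])

# Synthetic label: higher utilization and more delinquencies raise default risk.
logit = -2.0 + 2.5 * utilization + 0.3 * delinquencies - income / 100_000
y = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X_tr, y_tr)
scores = model.predict_proba(X_te)[:, 1]         # predicted default probability
print(f"Hold-out AUC: {roc_auc_score(y_te, scores):.3f}")
```

The predicted probabilities, rather than hard labels, are what a credit-risk workflow would typically consume, since lenders set approval thresholds according to their own risk appetite.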
113

A Qualitative Comparative Analysis of Data Breaches at Companies with Air-Gap Cloud Security and Multi-Cloud Environments

T Richard Stroupe Jr. (17420145) 20 November 2023 (has links)
The purpose of this qualitative case study was to describe how multi-cloud and cloud-based air-gapped system security breaches occurred, how organizations responded, what kinds of data were breached, and what security measures were implemented after the breach to prevent and repel future attacks. Qualitative research methods and secondary survey data were combined to answer the research questions. Because limited information is available on successful unauthorized breaches of multi-cloud and cloud-based air-gapped systems and the corresponding data, the study focused on discovering variables from several trustworthy sources of secondary data, including breach reports, press releases, public interviews, and news articles from the last five years, together with qualitative survey data. The sample included highly trained cloud professionals with air-gapped cloud experience from Amazon Web Services, Microsoft, Google, and Oracle. The study used unstructured interviews with open-ended questions and observations to record and document data and analyze results.

By describing instances of multi-cloud and cloud-based air-gapped system breaches from the last five years, this study could add to the body of literature on best practices for securing cloud-based data, preventing data breaches on such systems, and recovering from a breach once it has occurred. The study would have significance for companies aiming to protect sensitive data from cyber attackers, and for individuals who have provided their confidential data to companies that utilize such systems. Twelve themes emerged from the primary data: Air Gap Weaknesses Same as Other Systems; Misconfiguration of Cloud Settings; Insider Threat as Attack Vector; Phishing as Attack Vector; Software as Attack Vector; Physical Media as Attack Vector; Lack of Reaction to Breaches; Better Authentication to Prevent Breaches; Communication and Training in Response to Breach; Specific Responses to Specific Problems; Greater Separation of Risk from User End; and Greater Separation of Risk from Service End. In the secondary data, four themes emerged for AWS, two for Microsoft Azure, and three each for Google Cloud and Oracle.
114

Introducing Generative Artificial Intelligence in Tech Organizations : Developing and Evaluating a Proof of Concept for Data Management powered by a Retrieval Augmented Generation Model in a Large Language Model for Small and Medium-sized Enterprises in Tech

Lithman, Harald, Nilsson, Anders January 2024 (has links)
In recent years, generative AI has made significant strides, likely leaving an irreversible mark on contemporary society. The launch of OpenAI's ChatGPT, built on GPT-3.5, in 2022 demonstrated both the performance and the accessibility of the technology. This has created demand for implementation solutions across industries, with companies eager to leverage the new opportunities generative AI brings. This thesis explores the common operational challenges faced by a small-scale tech enterprise and, with those challenges identified, examines the opportunities that contemporary generative AI solutions may offer. It further investigates what type of generative technology is suitable for adoption and how it can be implemented responsibly and sustainably. The authors approach the topic through 14 interviews, involving several AI researchers as well as the employees and executives of a small-scale tech enterprise that served as the case company, combined with a literature review.

The material was processed using multiple inductive thematic analyses to establish a solid foundation for the investigation, which led to the development of a Proof of Concept. The authors' findings and conclusions emphasize the importance of a clear purpose for any implementation of generative technology. Moreover, the authors predict that a sustainable and responsible implementation can create the conditions the case company needs to grow.

When the authors investigated operational challenges at the case company, the most significant issue proved to stem from unstructured and partially absent documentation. The authors conclude that a data management system powered by a retrieval-augmented generation (RAG) model in a large language model (LLM) offers a potential path to significant value creation: the solution enables retrieval from unstructured project data while mitigating a major inherent issue with the technology, namely hallucination. In terms of implementation circumstances, both empirical and theoretical findings suggest that responsible use of generative technology requires training; the authors have therefore developed an educational framework named "KLART".

Looking ahead, the authors argue that sustainable implementation requires transparent systems, since transparency increases understanding, which in turn affects trust and secure use. The findings also indicate that sustainability is strongly linked to the user-friendliness of the AI service, leading the authors to emphasize human-centered design (HCD) when developing and maintaining AI services. Finally, the authors argue for the value of automation, which allows continuous data and system updates that can reduce maintenance.

In summary, this thesis aims to contribute to an understanding of how small-scale tech enterprises can implement generative AI technology sustainably to strengthen their competitive edge through innovation and data-driven decision-making.
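A minimal sketch of the retrieval step such a system rests on appears below, under stated assumptions: TF-IDF similarity stands in for the embedding model the thesis's Proof of Concept would use, the documents and prompt template are invented, and the final LLM call is left as a placeholder rather than any specific API:

```python
# Hypothetical sketch of RAG-style retrieval: rank stored documents by
# similarity to the query, then ground the LLM prompt in the top matches.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Project Alpha: deployment checklist and rollback procedure.",
    "Meeting notes 2023-05: decided to migrate billing to service B.",
    "Onboarding guide: how to request repository and VPN access.",
]

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(documents)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    scores = cosine_similarity(vectorizer.transform([query]), doc_matrix)[0]
    return [documents[i] for i in scores.argsort()[::-1][:k]]

query = "How do I get VPN access?"
context = "\n".join(retrieve(query))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
# The assembled prompt would be sent to the LLM; constraining the answer to
# retrieved text is what mitigates the hallucination issue noted above.
print(prompt)
```

Because the model answers from retrieved project documents rather than from its parametric memory alone, this design directly addresses the unstructured-documentation problem the thesis identifies.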
