About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
561

Predicting Graft Loss Following Acute Kidney Injury in Patients With a Kidney Transplant

Molnar, Amber January 2016 (has links)
Acute kidney injury (AKI), characterized by an abrupt loss of kidney function with retention of nitrogenous waste products, is common in the months to years following kidney transplantation and is associated with an increased risk of transplant failure (graft loss). Kidney transplant patients who experience graft loss and return to dialysis have an increased mortality risk and a lower quality of life. Research involving kidney transplant patients can prove challenging, as they are relatively small in number. To increase statistical power, researchers may utilize administrative databases. However, these databases are not designed primarily for research, and knowledge of their limitations is needed, as significant bias can occur. When using administrative databases to study AKI in kidney transplantation, the method used to define AKI should be carefully considered. The power of a study may be greatly increased if AKI can be accurately defined using administrative diagnostic codes, because data on AKI will be universally available for all patients in the database. However, the methods by which diagnostic codes are assigned to a patient allow for error to be introduced. We confirmed that, when compared to the gold-standard definition of AKI as a rise in serum creatinine, the diagnostic code for AKI has low sensitivity but high specificity in the kidney transplant population: the best-performing coding algorithm had a sensitivity of 42.9% (95% CI 29.7, 56.8) and a specificity of 89.3% (95% CI 86.2, 91.8) (Chapter 3). We therefore determined that for the study outlined in Chapter 4, defining AKI using diagnostic codes would significantly under-capture AKI and misclassify patients. We decided to define AKI using only serum creatinine criteria even though this would limit our sample size (creatinine data were only available for a subset of patients in the administrative databases). In Chapter 4, we derived an index score to predict the risk of graft loss in kidney transplant patients following an admission to hospital with AKI. The index includes six readily available, objective clinical variables that increased the risk of graft loss: increasing age, increased severity of AKI (as defined by the AKIN staging system), failure to recover from AKI, lower baseline estimated glomerular filtration rate, increased time from kidney transplant to AKI admission, and deceased donor. The derived index requires validation in order to assess its utility in the clinical realm.
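The abstract does not report the index's weights, so the following is only a rough, hypothetical sketch of how such a points-based index is typically scored over the six named variables; every threshold and point value below is a placeholder, not the index derived in Chapter 4.

# Hypothetical illustration of a points-based risk index for graft loss after AKI.
# The six inputs mirror the variables named in the abstract; all weights and
# thresholds are placeholders, NOT the index derived in the thesis.
def graft_loss_risk_points(age, akin_stage, recovered_from_aki,
                           baseline_egfr, years_since_transplant, deceased_donor):
    points = 0
    points += 2 if age >= 60 else (1 if age >= 40 else 0)          # increasing age
    points += akin_stage - 1                                        # AKIN stage 1-3
    points += 2 if not recovered_from_aki else 0                    # failure to recover
    points += 2 if baseline_egfr < 30 else (1 if baseline_egfr < 45 else 0)
    points += 1 if years_since_transplant >= 5 else 0               # time since transplant
    points += 1 if deceased_donor else 0                            # donor type
    return points

# Example: 65 years old, AKIN stage 2, not recovered, eGFR 40, 6 years
# post-transplant, deceased-donor kidney -> a high (hypothetical) score.
print(graft_loss_risk_points(65, 2, False, 40.0, 6.0, True))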
562

Secure multimedia databases.

Pedroncelli, Antony 02 June 2008 (has links)
A message can be communicated to other people using a combination of pictures, sounds, and actions. Ensuring that the message is understood as intended often depends on the presentation of these forms of multimedia. In today's digital world, traditional multimedia artefacts such as paintings, photographs, audiotapes and videocassettes, although still used, are gradually being replaced with digital equivalents. It is normally easy to duplicate these digital multimedia files, and they are often available within public repositories. Although this has its advantages, security may be a concern, especially for sensitive multimedia data. Information security services such as identification and authentication, authorisation, and confidentiality can be implemented to secure the data at the file level, ensuring that only authorised entities gain access to the entire multimedia file. It may not always be the case, however, that a message must be conveyed in the same way for every entity (user or program) that requests the multimedia data. Although access control measures can be enforced for multimedia at the file level, very little work has been done to ensure access control for multimedia at the content level. A number of models are presented in this dissertation to provide logical access control at the content level for the three main types of multimedia, namely images, audio, and video. In all of these models, the multimedia data is securely stored in a repository, while the associated security information is stored in a database. The objects that contain the authorisation information are created through an interface that securely communicates with the database. Requests are made through another secure interface, where only the authorised multimedia data is assembled according to the requesting entity's security classification. Certain important side issues concerning the secure multimedia models are also discussed, including security issues surrounding the model components and suspicion, i.e., reducing the probability that a requesting entity would conclude that changes were made to the original multimedia data. / Prof. M.S. Olivier
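As a minimal sketch of the content-level idea (not the dissertation's actual models), the fragment below attaches a security label to each segment of a media object and assembles only the segments the requesting entity is cleared to see; the class names and label ordering are illustrative assumptions.

# Minimal sketch of content-level access control for multimedia: each segment
# of a media object carries its own classification, and a request is answered
# by assembling only the segments the requester is cleared for. Names and the
# label ordering are illustrative, not the dissertation's models.
LEVELS = {"public": 0, "internal": 1, "secret": 2}

class MediaSegment:
    def __init__(self, content: bytes, classification: str):
        self.content = content
        self.classification = classification

class MediaObject:
    def __init__(self, segments):
        self.segments = segments

    def assemble_for(self, clearance: str) -> bytes:
        """Return only the segments the requesting entity may see."""
        allowed = LEVELS[clearance]
        return b"".join(s.content for s in self.segments
                        if LEVELS[s.classification] <= allowed)

video = MediaObject([
    MediaSegment(b"<intro frames>", "public"),
    MediaSegment(b"<sensitive frames>", "secret"),
    MediaSegment(b"<closing frames>", "public"),
])
print(video.assemble_for("internal"))  # the secret frames are omitted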
563

Geospatial Data Indexing Analysis and Visualization via Web Services with Autonomic Resource Management

Lu, Yun 07 November 2013 (has links)
With the exponential growth in the usage of web-based map services, web GIS applications have become more and more popular. Spatial data indexing, search, analysis, visualization, and the resource management of such services are becoming increasingly important for delivering user-desired Quality of Service (QoS). First, spatial indexing is typically time-consuming and is not available to end users. To address this, we introduce TerraFly sksOpen, an open-source Online Indexing and Querying System for Big Geospatial Data. Integrated with the TerraFly Geospatial database [1-9], sksOpen is an efficient indexing and query engine for processing Top-k Spatial Boolean Queries. Further, we provide ergonomic visualization of query results on interactive maps to facilitate the user's data analysis. Second, due to the highly complex and dynamic nature of GIS systems, it is quite challenging for end users to quickly understand and analyze spatial data, and to efficiently share their own data and analysis results with others. Built on the TerraFly Geospatial database, TerraFly GeoCloud is an extra layer running on top of the TerraFly map that efficiently supports many different visualization functions and spatial data analysis models. Furthermore, users can create unique URLs to visualize and share analysis results. TerraFly GeoCloud also offers the MapQL technology to customize map visualization using SQL-like statements [10]. Third, map systems often serve dynamic web workloads and involve multiple CPU- and I/O-intensive tiers, which makes it challenging to meet the response-time targets of map requests while using resources efficiently. Virtualization facilitates the deployment of web map services and improves their resource utilization through encapsulation and consolidation. Autonomic resource management allows resources to be automatically provisioned to a map service and its internal tiers on demand. v-TerraFly is a set of techniques to predict the demand of map workloads online and optimize resource allocations, considering both response time and data freshness as the QoS targets. The proposed v-TerraFly system is prototyped on TerraFly, a production web map service, and evaluated using real TerraFly workloads. The results show that v-TerraFly predicts workload demands 18.91% more accurately and allocates resources efficiently to meet the QoS target, improving QoS by 26.19% and saving 20.83% in resource usage compared to traditional peak-load-based resource allocation.
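As a rough illustration of a Top-k Spatial Boolean Query (not sksOpen's engine or API), the sketch below keeps only the objects whose keyword sets satisfy a Boolean containment filter and returns the k objects nearest to the query location; a production engine would use a spatial index rather than this linear scan.

# Illustrative top-k spatial Boolean query (not the sksOpen engine or its API):
# keep only objects whose keywords contain all required terms, then return the
# k objects nearest to the query point.
import heapq
import math

def topk_spatial_boolean(objects, query_xy, required_keywords, k):
    qx, qy = query_xy
    candidates = (
        (math.hypot(x - qx, y - qy), name)
        for name, (x, y), keywords in objects
        if required_keywords <= keywords          # Boolean containment filter
    )
    return heapq.nsmallest(k, candidates)         # nearest k qualifying objects

pois = [
    ("cafe_a", (25.76, -80.19), {"coffee", "wifi"}),
    ("cafe_b", (25.77, -80.21), {"coffee"}),
    ("lib_c",  (25.75, -80.20), {"wifi", "books"}),
]
print(topk_spatial_boolean(pois, (25.76, -80.20), {"wifi"}, k=2))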
564

Tratamento de condições especiais para busca por similaridade em bancos de dados complexos / Treatment of special conditions for similarity searching in complex databases

Daniel dos Santos Kaster 23 April 2012 (has links)
The amount of complex data (images, videos, time series, and others) has been growing at a very fast pace. Complex data are well suited to being searched by similarity, which means defining queries according to a given similarity criterion. Moreover, complex data are usually associated with other information, generally of conventional data types, which must be employed in conjunction with similarity operations to answer complex queries. Several works have proposed techniques for similarity searching; however, most of these approaches were not conceived to be integrated into a DBMS, treating similarity queries as isolated operations detached from the query processor. The main objective of this thesis is to propose algebraic alternatives, data structures, and algorithms that allow similarity queries to be widely combined with the search operations provided by relational DBMSs and that execute such composite queries efficiently. To reach this goal, this work presents two main contributions. The first contribution is the proposal of a new similarity operation, called the condition-extended k-Nearest Neighbor query (ck-NNq), which extends the k-Nearest Neighbor query (k-NNq) with an additional condition, modifying the operation's semantics. The proposed operation can represent queries demanded by several applications that could not be expressed before, and it allows complementary filtering conditions to be homogeneously integrated into the k-NNq. The second contribution is the development of FMI-SiR (user-defined Features, Metrics and Indexes for Similarity Retrieval), a database module that allows similarity queries to be executed together with the other DBMS operations. The module allows user-defined feature extraction methods and distance functions to be included in the database core, providing great flexibility, and it also offers special treatment for medical images. Moreover, experiments on real databases verified that the implementation of FMI-SiR on top of the Oracle DBMS is able to efficiently query very large complex databases.
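As a minimal sketch of the semantics behind a condition-extended k-NN query (not the FMI-SiR code), the fragment below evaluates the extra predicate as part of the query, returning the k nearest elements among those that satisfy the condition rather than filtering a plain k-NN result afterwards; all names are illustrative.

# Minimal sketch of a condition-extended k-NN (ck-NN) query, not FMI-SiR itself:
# the extra predicate is part of the query semantics, so the result is the k
# nearest elements *among those satisfying the condition* -- unlike filtering a
# plain k-NN result afterwards, which may return fewer than k elements.
import heapq

def ck_nn(dataset, query_features, distance, k, condition):
    qualifying = (
        (distance(query_features, features), item)
        for item, features in dataset
        if condition(item)
    )
    return heapq.nsmallest(k, qualifying, key=lambda pair: pair[0])

# Example with toy 2-D feature vectors and a metadata condition.
euclidean = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
images = [
    ({"id": 1, "modality": "MRI"}, (0.1, 0.2)),
    ({"id": 2, "modality": "CT"},  (0.1, 0.1)),
    ({"id": 3, "modality": "MRI"}, (0.9, 0.8)),
]
print(ck_nn(images, (0.0, 0.0), euclidean, k=2,
            condition=lambda img: img["modality"] == "MRI"))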
565

Cybersecurity Strategies for Universities With Bring Your Own Device Programs

Nguyen, Hai Vu 01 January 2019 (has links)
The bring your own device (BYOD) phenomenon has proliferated, making its way into different business and educational sectors and enabling multiple vectors of attack and vulnerability to protected data. The purpose of this multiple-case study was to explore the strategies information technology (IT) security professionals working in a university setting use to secure an environment that supports BYOD in a university system. The study population comprised IT security professionals from the University of California campuses who had managed, for at least 2 years, a network environment in which BYOD had been implemented. Protection motivation theory was the study's conceptual framework. The data collection process included interviews with 10 IT security professionals and the gathering of publicly accessible documents retrieved from the Internet (n = 59). Data collected from the interviews and member checking were triangulated with the publicly accessible documents to identify major themes. Thematic analysis with the aid of NVivo 12 Plus was used to identify 4 themes: the ubiquity of BYOD in higher education, accessibility strategies for mobile devices, the effectiveness of BYOD strategies that minimize risk, and the role of IT security professionals in identifying and implementing network security strategies. The study's implications for positive social change include increasing the number of users who are informed about cybersecurity and comfortable with defending their networks against foreign and domestic threats to information security and privacy. These changes may mitigate and reduce the spread of malware and viruses and improve overall cybersecurity in BYOD-enabled organizations.
566

Probabilistic Algorithms, Lean Methodology Techniques, and Cell Optimization Results

McCurrey, Michael 01 January 2019 (has links)
There is a significant technology deficiency within the U.S. manufacturing industry compared to other countries. To adequately compete in the global market, lean manufacturing organizations in the United States need to look beyond their traditional methods of evaluating their processes to optimize their assembly cells for efficiency. Utilizing the task-technology fit theory, this quantitative correlational study examined the relationships among software using probabilistic algorithms, lean methodology techniques, and manufacturer cell optimization results. Participants consisted of individuals performing the role of systems analyst within a manufacturing organization using lean methodologies in the Southwestern United States. Data were collected from 118 responses from systems analysts through a survey instrument that integrated two instruments with proven reliability. Multiple regression analysis revealed significant positive relationships among software using probabilistic algorithms, lean methodology, and cell optimization results. These findings may provide management with information regarding the skill sets required for systems analysts to implement software using probabilistic algorithms and lean manufacturing techniques to improve cell optimization results. The findings of this study may contribute to society through the potential to bring sustainable economic improvement to impoverished communities through the implementation of efficient manufacturing solutions with lower capital expenditures.
567

Exploring Industry Cybersecurity Strategy in Protecting Critical Infrastructure

Boutwell, Mark 01 January 2019 (has links)
Successful attacks on critical infrastructure have increased in occurrence and sophistication. Many cybersecurity strategies incorporate conventional best practices but often do not consider organizational circumstances and nonstandard critical infrastructure protection needs. The purpose of this qualitative multiple case study was to explore cybersecurity strategies used by information technology (IT) managers and compliance officers to mitigate cyber threats to critical infrastructure. The population for this study comprised IT managers and compliance officers of 4 case organizations in the Pacific Northwest United States. The routine activity theory developed by criminologists Cohen and Felson in 1979 was used as the conceptual framework. Data collection consisted of interviews with 2 IT managers and 3 compliance officers, along with 25 documents related to cybersecurity and associated policy governance. A software tool was used in a thematic analysis approach against the data collected from the interviews and documentation. Data triangulation revealed 4 major themes: the crucial role of a robust workforce training program, the priority of infrastructure resiliency, the importance of security awareness, and the importance of organizational leadership support and investment. This study revealed key strategies that may help improve the cybersecurity strategies used by IT and compliance professionals, which can mitigate successful attacks against critical infrastructure. The study findings will contribute to positive social change through an exploration and contextual analysis of cybersecurity strategy with situational awareness of IT practices to enhance cyber threat mitigation and inform business processes.
568

Accelerating SPARQL Queries and Analytics on RDF Data

Al-Harbi, Razen 09 November 2016 (has links)
The complexity of SPARQL queries and RDF applications poses great challenges for distributed RDF management systems. SPARQL workloads are dynamic and consist of queries with variable complexities. Hence, systems that use static partitioning suffer from communication overhead for workloads that generate excessive communication. Concurrently, RDF applications are becoming more sophisticated, mandating analytical operations that extend beyond SPARQL queries. Being primarily designed and optimized to execute SPARQL queries, which lack procedural capabilities, existing systems are not suitable for rich RDF analytics. This dissertation tackles the problem of accelerating SPARQL queries and RDF analytics on distributed shared-nothing RDF systems. First, a distributed RDF engine, coined AdPart, is introduced. AdPart uses lightweight hash partitioning to shard triples by their subject values, rendering its startup overhead very low. The locality-aware query optimizer of AdPart takes full advantage of the partitioning to (i) support the fully parallel processing of join patterns on subjects and (ii) minimize data communication for general queries by applying hash distribution of intermediate results instead of broadcasting, wherever possible. By exploiting hash-based locality, AdPart achieves better or comparable performance to systems that employ sophisticated partitioning schemes. To cope with workload dynamism, AdPart is extended to adapt dynamically to workload changes. AdPart monitors the data access patterns and dynamically redistributes and replicates the instances of the most frequent patterns among workers. Consequently, the communication cost for future queries is drastically reduced or even eliminated. Experiments with synthetic and real data verify that AdPart starts faster than all existing systems and gracefully adapts to the query load. Finally, to support and accelerate rich RDF analytical tasks, a vertex-centric RDF analytics framework is proposed. The framework, named SPARTex, bridges the gap between RDF and graph processing. To do so, SPARTex (i) implements a generic SPARQL operator as a vertex-centric program, coupled with an optimizer that generates efficient execution plans; (ii) allows SPARQL to invoke vertex-centric programs as stored procedures; and (iii) provides a unified in-memory data store that allows the persistence of intermediate results. Consequently, SPARTex can efficiently support RDF analytical tasks consisting of complex pipelines of operators.
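As a rough sketch of the subject-hash sharding idea described for AdPart (not its actual implementation), the fragment below routes each triple to a worker by hashing its subject, so all triples sharing a subject land on the same worker and join patterns on that subject can be evaluated locally.

# Rough sketch of hash partitioning RDF triples by subject, in the spirit of
# the scheme described above (not AdPart's code): every triple with the same
# subject is routed to the same worker, so subject-star join patterns execute
# locally without communication.
import zlib
from collections import defaultdict

def partition_by_subject(triples, num_workers):
    shards = defaultdict(list)
    for s, p, o in triples:
        worker = zlib.crc32(s.encode("utf-8")) % num_workers
        shards[worker].append((s, p, o))
    return shards

triples = [
    ("ex:alice", "foaf:knows", "ex:bob"),
    ("ex:alice", "foaf:name", '"Alice"'),
    ("ex:bob",   "foaf:name", '"Bob"'),
]
for worker, shard in partition_by_subject(triples, num_workers=4).items():
    print(worker, shard)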
569

Implementing the GraphQL Interface on top of a Graph Database

Mattsson, Linn January 2020 (has links)
Since becoming an open-source project in 2015, GraphQL has gained popularity as a query language used from front end to back end, ensuring that no over-fetching or under-fetching is performed. While the query language has been openly available for a few years, there has been little academic research in this area. The aim of this thesis is to create an approach for using GraphQL on top of a graph database, as well as to evaluate the optimisation techniques available for this approach. This was done by developing logical plans and query execution plans, and the most suitable optimisation techniques were found to be parallel execution and batching of database calls. The implementation was done in Java using the graph computing framework Apache TinkerPop, which is compatible with a number of graph databases; however, this implementation focuses on the graph database management system Neo4j. To evaluate the implementation, query templates and data from the Linköping GraphQL Benchmark were used. The logical plans were created by converting a GraphQL query into a tree of logical operators. The query execution plans were based on four different primitives from the Apache TinkerPop framework, and each physical operator was influenced by one or more logical operators. The performance tests of the implementation showed that query execution times were largely dependent on the query template as well as the number of database nodes visited. A pattern also emerged between execution times and the number of threads used in parallel execution: queries with lower execution times (<100 ms) improved most when 4-6 threads were used, while queries with higher execution times improved most with 12-24 threads. For very fast query executions (<5 ms), threading caused more overhead than the time saved by parallel execution, and in these cases it was better not to use any threading.
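As a simplified sketch of the conversion step described above, written in Python rather than the thesis's Java/Apache TinkerPop setting, the fragment below turns a pre-parsed GraphQL selection set into a small tree of logical operators; the operator names are illustrative assumptions.

# Simplified sketch of converting a (pre-parsed) GraphQL selection set into a
# tree of logical operators, in the spirit of the approach described above.
# The operator names are illustrative; the thesis implementation is in Java.
from dataclasses import dataclass, field
from typing import List, Union

@dataclass
class GetField:
    """Fetch a scalar property of the current node."""
    name: str

@dataclass
class Traverse:
    """Follow an edge/relationship, then apply the child operators."""
    edge: str
    children: List["LogicalOp"] = field(default_factory=list)

LogicalOp = Union[GetField, Traverse]

def to_logical_plan(selection: dict) -> List[LogicalOp]:
    """selection maps a field name to None (scalar) or a nested dict (edge)."""
    plan: List[LogicalOp] = []
    for name, sub in selection.items():
        plan.append(GetField(name) if sub is None
                    else Traverse(name, to_logical_plan(sub)))
    return plan

# The GraphQL query { person { name  knows { name } } } as a pre-parsed dict:
query = {"person": {"name": None, "knows": {"name": None}}}
for op in to_logical_plan(query):
    print(op)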
570

Exploring Strategies for Implementing Information Security Training and Employee Compliance Practices

Dawson, Alan Robert 01 January 2019 (has links)
Humans are the weakest link in any information security (IS) environment. Research has shown that humans account for more than half of all security incidents in organizations. The purpose of this qualitative case study was to explore the strategies IS managers use to provide training and awareness programs that improve compliance with organizational security policies and reduce the number of security incidents. The population for this study was IS security managers from 2 organizations in Western New York. Information theory and institutional isomorphism were the conceptual frameworks for this study. Data collection was performed using face-to-face interviews with IS managers (n = 3) as well as secondary data analysis of documented IS policies and procedures (n = 28). Analysis and coding of the interview data were performed using a qualitative analysis tool called NVivo, which helped identify the primary themes. Developing IS policy, building a strong security culture, and establishing and maintaining a consistent, relevant, and role-based security awareness and training program were a few of the main themes that emerged from the analysis. The findings from this study may drive social change by providing IS managers with additional information on developing IS policy, building an IS culture, and developing role-specific training and awareness programs. Improved IS practices may contribute to social change by reducing IS risk within organizations as well as reducing personal IS risk through improved IS habits.
