About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
111

FROM APPLICATION OF ORGANIC THIN MULTILAYER FILMS IN 3D OPTICAL DATA STORAGE TO THEIR FABRICATION FOR ORGANIC ELECTRONIC DEVICES

Saini, Anuj 01 June 2016 (has links)
No description available.
112

Reconstruction of 3D human facial images using partial differential equations.

Elyan, Eyad, Ugail, Hassan January 2007 (has links)
One of the challenging problems in geometric modeling and computer graphics is the construction of realistic human facial geometry. Such geometry is essential for a wide range of applications, such as 3D face recognition, virtual reality, facial expression simulation, and computer-based plastic surgery. This paper presents a method for constructing the 3D geometry of human faces based on elliptic Partial Differential Equations (PDEs). The geometry corresponding to a human face is treated as a set of surface patches, where each patch is represented by four boundary curves in 3-space that formulate the appropriate boundary conditions for the chosen PDE. These boundary curves are extracted automatically from 3D data of human faces obtained with a 3D scanner. The solution of the PDE generates a continuous single surface patch describing the geometry of the original scanned data. Through a number of experimental verifications, we show the efficiency of the PDE-based method for 3D facial surface reconstruction from scan data. In addition, we show that our approach provides an efficient way of representing a face with a small set of parameters that could be utilized for efficient facial data storage and verification purposes.
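As an illustration of the patch-generation step described in the abstract, the sketch below relaxes Laplace's equation over a (u, v) parameter grid whose four edges carry the boundary curves. The grid size, iteration count, and the choice of the second-order Laplace equation (rather than the higher-order elliptic PDE typically used in this line of work) are simplifying assumptions, not details from the paper.

```python
import numpy as np

def pde_surface_patch(top, bottom, left, right, iters=2000):
    """Fill a surface patch from four boundary curves by relaxing
    Laplace's equation on the (u, v) parameter grid, one component
    of (x, y, z) at a time.  Boundary arrays: top/bottom have shape
    (n, 3), left/right have shape (m, 3), with matching corners."""
    m, n = left.shape[0], top.shape[0]
    s = np.zeros((m, n, 3))
    s[0, :], s[-1, :] = top, bottom
    s[:, 0], s[:, -1] = left, right
    for _ in range(iters):
        # Jacobi sweep: each interior point becomes the average of
        # its four neighbours; the boundary rows/columns stay fixed.
        s[1:-1, 1:-1] = 0.25 * (s[:-2, 1:-1] + s[2:, 1:-1]
                                + s[1:-1, :-2] + s[1:-1, 2:])
    return s
```

Because linear functions are harmonic, feeding in boundary curves lying on a plane reproduces that plane exactly, which makes the sketch easy to sanity-check.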
113

Efficient 3D data representation for biometric applications

Ugail, Hassan, Elyan, Eyad January 2007 (has links)
An important issue in many of today's biometric applications is the development of efficient and accurate techniques for representing related 3D data. Such data is often obtained by digitizing complex geometric objects of importance to biometric applications. For example, in the area of 3D face recognition, a digital point cloud corresponding to a given face is usually provided by a 3D scanner. For efficient data storage, and for identification/authentication in a timely fashion, such data needs to be represented using a few meaningful parameters or variables. Here we show how mathematical techniques based on Partial Differential Equations (PDEs) can be utilized to represent complex 3D data in an efficiently parameterized way. For example, we show how a 3D face can be represented using PDEs, whereby a handful of key facial parameters can be identified for efficient storage and verification.
114

Data management strategies in the retail sector : Unlocking the potential of cost-effective data management for retail companies

Gamstorp, Viktor, Olausson, Simon January 2024 (has links)
In today's digital landscape, data is akin to oil: pivotal for decision-making and innovation, especially in retail with its vast stores of customer data. However, accumulating data presents challenges, notably in cost-effective management. This thesis explores strategies for retail firms to optimize data management without sacrificing the data's potential benefits, drawing on insights from interviews with five retail companies and the implementation of a product recommendation model. The study reveals that while storage costs are perceived as low, the prevalent "store it all" approach results in storing vast amounts of unused data, incurring unnecessary expenses. Furthermore, compliance with GDPR primarily shapes companies' data retention policies, with some companies opting for automated deletion or anonymization to align with regulations; in practice, however, inconsistencies exist regarding data storage intentions. The thesis culminates in a four-step strategic framework to enhance data management: assessing data lifespan, implementing archiving routines, anonymizing and aggregating data, and evaluating cost versus utility. The research underscores the need for deletion strategies to prevent data overload and maintain cost-effectiveness. This thesis contributes to understanding data value and offers practical guidance for retail firms to navigate data management efficiently while complying with regulations such as GDPR. Future research could delve into the long-term impacts of retention policies on business operations, assessing data deletion or archiving over extended periods; longitudinal studies with access to company data would enrich this exploration.
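The four-step framework summarized above can be made concrete as a simple retention rule that maps a record's age to an action. The thresholds and action names below are illustrative assumptions, not values from the thesis.

```python
from datetime import date

def retention_action(record_date, today, *, hot_days=90, archive_days=365):
    """Illustrative retention rule in the spirit of the four-step
    framework: recent data stays in hot storage, older data is moved
    to an archive, and data past the archive window is anonymized and
    aggregated before the raw rows are deleted.  The day thresholds
    are example values, not figures from the thesis."""
    age = (today - record_date).days
    if age <= hot_days:
        return "keep"
    if age <= archive_days:
        return "archive"
    return "anonymize_and_aggregate"
```

In a real deployment the thresholds would come from the cost-versus-utility evaluation the thesis describes, and the anonymization step would have to satisfy GDPR's requirements on re-identification.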
115

A NEW DATA PROCESSING TECHNIQUE OF PPM/PPK WITHOUT THE REFERENCE PULSE

Xi-Hua, Li 11 1900 (has links)
International Telemetering Conference Proceedings / October 29-November 02, 1990 / Riviera Hotel and Convention Center, Las Vegas, Nevada / This paper describes a technique in which signal conversion, data processing, and data storage for PPM and PPK (pulse position keying) are carried out directly, without filling in the reference pulse. Through analysis of the typical frame structure of PPM/PPK signals, several mathematical models of the system's signal timing relationships were derived; based on these, an engineering approach and a principle block diagram for signal conversion, data processing, and data storage are presented.
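One way to operate without a reference pulse, in the spirit of the abstract, is a differential scheme in which the information rides on the gaps between consecutive pulses rather than on positions measured from a frame reference. The guard interval and symbol encoding below are illustrative assumptions, not the paper's actual frame structure.

```python
def encode_dppm(symbols, guard=2):
    """Differential PPM sketch: each symbol k is transmitted as an
    inter-pulse gap of (guard + k) slots, so the receiver needs no
    frame-reference pulse, only consecutive pulse times."""
    times, t = [0], 0
    for k in symbols:
        t += guard + k
        times.append(t)
    return times

def decode_dppm(times, guard=2):
    """Recover symbols from successive inter-pulse gaps."""
    return [b - a - guard for a, b in zip(times, times[1:])]
```

The guard interval keeps consecutive pulses separated even when a symbol is zero; a real system would also budget for clock jitter when sizing it.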
116

DESIGN OF A RACE CAR TELEMETERING SYSTEM

Ameri, K. Al, Hanson, P., Newell, N., Welker, J., Yu, K., Zain, A. 10 1900 (has links)
International Telemetering Conference Proceedings / October 27-30, 1997 / Riviera Hotel and Convention Center, Las Vegas, Nevada / This student paper was produced as part of the team design competition in the University of Arizona course ECE 485, Radiowaves and Telemetry. It describes the design of a telemetering system for race cars. Auto racing is an exciting sport in which the winners are those able to optimize the balance between the driver's skill and the racing team's technology. One of the main reasons for this excitement is that the main component, the race car, travels at extremely high speeds while constantly making quick maneuvers. To do this continually, the car itself must be constantly monitored and possibly adjusted to ensure proper maintenance and prevent damage. To allow better monitoring of the car's performance by the pit crew and other team members, a telemetering system has been designed that facilitates the constant monitoring and evaluation of various aspects of the car. This system provides a way for the car's speed, engine RPM, engine and engine-compartment temperature, oil pressure, tire pressure, fuel level, and tire wear to be measured, transmitted back to the pit, and presented in a form that can be evaluated and used to increase the car's performance and better its chances of winning the race. Furthermore, the system allows the data to be stored for later reference and analysis.
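A minimal sketch of the kind of frame such a system might transmit, assuming a hypothetical fixed-point layout: the sync word, field order, and field widths here are invented for illustration and are not taken from the paper.

```python
import struct

# Hypothetical frame layout: a 16-bit sync word followed by seven
# sensor channels (speed, RPM, temperature, oil pressure, tire
# pressure, fuel level, tire wear).  Temperature is signed; the rest
# are unsigned 16-bit fields.  Big-endian, 16 bytes per frame.
FRAME = struct.Struct(">HHHhHHHH")
SYNC = 0xEB90

def pack_frame(speed, rpm, temp, oil, tire, fuel, wear):
    """Serialize one telemetry frame for transmission to the pit."""
    return FRAME.pack(SYNC, speed, rpm, temp, oil, tire, fuel, wear)

def unpack_frame(buf):
    """Parse a received frame, rejecting buffers that lost sync."""
    sync, *channels = FRAME.unpack(buf)
    if sync != SYNC:
        raise ValueError("lost frame sync")
    return channels
```

A fixed-size binary frame like this also makes the "store for later analysis" requirement cheap: frames can be appended to a log file and indexed by offset.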
117

BlobSeer: Towards efficient data storage management for large-scale, distributed systems

Nicolae, Bogdan 30 November 2010 (has links) (PDF)
With data volumes increasing at a high rate and the emergence of highly scalable infrastructures (cloud computing, petascale computing), distributed management of data becomes a crucial issue that faces many challenges. This thesis brings several contributions to address such challenges. First, it proposes a set of principles for designing highly scalable distributed storage systems that are optimized for heavy data-access concurrency; in particular, it highlights the potentially large benefits of using versioning in this context. Second, based on these principles, it introduces a series of distributed data and metadata management algorithms that enable high throughput under concurrency. Third, it shows how to efficiently implement these algorithms in practice, dealing with key issues such as high-performance parallel transfers, efficient maintenance of distributed data structures, and fault tolerance. These results are used to build BlobSeer, an experimental prototype that demonstrates both the theoretical benefits of the approach in synthetic benchmarks and its practical benefits in real-life application scenarios: as a storage backend for MapReduce applications, as a storage backend for the deployment and snapshotting of virtual machine images in clouds, and as a quality-of-service-enabled data storage service for cloud applications. Extensive experiments on the Grid'5000 testbed show that BlobSeer remains scalable and sustains high throughput even under heavy access concurrency, outperforming several state-of-the-art approaches by a large margin.
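The versioning principle highlighted in the abstract can be sketched in miniature: writes never modify chunks in place, and each write publishes a new version that shares unmodified chunks with older versions, so readers of a snapshot are never blocked by concurrent writers. This toy class only gestures at the idea; BlobSeer's real distributed metadata and segment management are far more involved.

```python
class VersionedBlob:
    """Toy single-node sketch of versioning-based storage: chunks are
    immutable, and each write appends a new version that references a
    mix of old and new chunks (copy-on-write)."""

    def __init__(self, chunk_size=4):
        self.chunk_size = chunk_size
        self.chunks = {}           # chunk id -> immutable bytes
        self.versions = [[]]       # version number -> list of chunk ids

    def write(self, offset, data):
        """Apply a write against the latest version; returns the new
        version number.  Old versions remain readable unchanged."""
        content = bytearray(b"".join(self.chunks[c]
                                     for c in self.versions[-1]))
        if len(content) < offset + len(data):
            content.extend(b"\0" * (offset + len(data) - len(content)))
        content[offset:offset + len(data)] = data
        new = []
        for i in range(0, len(content), self.chunk_size):
            piece = bytes(content[i:i + self.chunk_size])
            cid = hash(piece)      # toy content addressing
            self.chunks.setdefault(cid, piece)
            new.append(cid)
        self.versions.append(new)
        return len(self.versions) - 1

    def read(self, version):
        """Read the full contents of any published version."""
        return b"".join(self.chunks[c] for c in self.versions[version])
```

Unmodified chunks are shared between versions via their content ids, which is what keeps snapshots cheap in this scheme.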
118

API-Based Acquisition of Evidence from Cloud Storage Providers

Barreto, Andres E 11 August 2015 (has links)
Cloud computing, and cloud storage services in particular, pose a new challenge to digital forensic investigations. Currently, evidence acquisition for such services still follows the traditional approach of collecting artifacts on a client device. In this work, we show that such an approach not only requires a substantial upfront investment in reverse engineering each service, but is also inherently incomplete, as it misses prior versions of the artifacts as well as cloud-only artifacts that have no standard serialized representation on the client. We introduce the concept of API-based evidence acquisition for cloud services, which addresses these concerns by utilizing the officially supported API of the service. To demonstrate the utility of this approach, we present a proof-of-concept acquisition tool, kumodd, which can acquire evidence from four major cloud storage providers: Google Drive, Microsoft OneDrive, Dropbox, and Box. The implementation provides both command-line and web user interfaces, and can be readily incorporated into established forensic processes.
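The core of the API-based approach, enumerating every file and pulling every stored revision rather than only the latest client-side copy, can be sketched against an abstract provider interface. The `provider` adapter and its method names here are hypothetical stand-ins for a real cloud-storage API client, not kumodd's actual code.

```python
def acquire_all_revisions(provider):
    """Sketch of API-based acquisition: walk every file the account
    can see (following API pagination) and collect every stored
    revision, capturing history a client device would not hold.
    `provider` is a hypothetical adapter exposing:
      list_files(page_token) -> (file_ids, next_token or None)
      revisions(file_id)     -> iterable of revision ids
      download(file_id, rev) -> revision contents
    """
    evidence, token = [], None
    while True:
        files, token = provider.list_files(token)
        for f in files:
            for rev in provider.revisions(f):
                evidence.append((f, rev, provider.download(f, rev)))
        if token is None:
            return evidence
```

In a forensic setting each downloaded revision would also be hashed and logged to preserve the chain of custody; that bookkeeping is omitted here.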
119

Curricular Optimization: Solving for the Optimal Student Success Pathway

Thompson-Arjona, William G. 01 January 2019 (has links)
Considering the significant investment in higher education made by students and their families, graduating in a timely manner is of the utmost importance. Delay attributable to dropout or the retaking of a course adds cost and negatively affects a student's academic progression. It therefore becomes paramount for institutions to focus on student success in relation to term scheduling. Often overlooked, the complexity of a course schedule may be one of the most important factors in whether or not a student successfully completes his or her degree. More often than not, students entering an institution as first-time, full-time (FTFT) freshmen follow the advised and published schedule given by administrators. Providing the optimal schedule, the one that gives the student the highest probability of success, is critical. To create this optimal schedule, this thesis introduces a novel optimization algorithm whose objective is to separate courses that, when taken together, hurt students' pass rates. Conversely, we combine synergistic pairings that improve a student's probability of success when the courses are taken in the same semester. Using actual student data from the University of Kentucky, we identify these positive and negative combinations by analyzing recorded pass rates. Using the Julia language on top of the Gurobi solver, we solve for the optimal degree plan of a student in the electrical engineering program using linear and non-linear multi-objective optimization. A user interface is provided for administrators to optimize their curricula at main.optimizeplans.com.
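The pairing objective described above, separating harmful course combinations and co-scheduling synergistic ones, can be illustrated on a toy scale. The thesis solves a multi-objective program in Julia with Gurobi; this sketch instead brute-forces a two-term assignment over an invented synergy score, purely to show the shape of the objective.

```python
from itertools import product

def best_two_term_split(courses, score):
    """Assign each course to one of two terms, maximizing the summed
    pairwise synergy of courses placed in the same term.  Negative
    scores model combinations that hurt pass rates, positive scores
    model synergistic combinations; unlisted pairs score zero."""
    best, best_assign = float("-inf"), None
    for assign in product([0, 1], repeat=len(courses)):
        total = sum(score.get((a, b), 0)
                    for i, a in enumerate(courses)
                    for j, b in enumerate(courses)
                    if i < j and assign[i] == assign[j])
        if total > best:
            best, best_assign = total, assign
    return {c: t for c, t in zip(courses, best_assign)}
```

Brute force is exponential in the number of courses, which is exactly why the thesis reaches for an integer-programming solver on realistic curricula.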
120

Applications of Lévy processes and branching processes to studies motivated by computer science and biology

Bansaye, Vincent 14 November 2008 (has links) (PDF)
In the first part, I study a continuous-time data storage process in which the hard disk is identified with the real line. This model is a continuous version of Knuth's original parking problem. Here, files arrive according to a Poisson process, and each file is stored in the first free spaces to the right of its arrival point, fragmenting if necessary. I first construct the model and give a geometric and analytic characterization of the portion of the disk covered at time t. I then study the asymptotic regimes at the moment the disk saturates. Finally, I describe the time evolution of a typical block of data. The second part is devoted to the study of branching processes, motivated by questions of cellular infection. I first consider a subcritical branching process in a random environment and establish limit theorems as a function of the initial population, together with properties of the environments, Yaglom limits, and the Q-process. I then use this process to establish results on a model describing the proliferation of a parasite in a dividing cell. I determine the probability of recovery, the asymptotic number of infected cells, and the asymptotic proportions of cells infected by a given number of parasites. These results depend on the regime of the branching process in a random environment. Finally, I add random contamination by external parasites.
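The discrete ancestor of the continuous storage model above, Knuth's parking problem, can be simulated in a few lines: each arriving item takes the first free slot at or to the right of its arrival point. This sketch is purely illustrative and omits the fragmentation that the thesis's continuous model allows.

```python
def knuth_parking(arrivals, size):
    """Discrete parking simulation: each (name, spot) arrival probes
    rightward from its arrival spot for the first free slot.  There
    is no wrap-around; items that run off the end are counted as
    lost rather than fragmented."""
    table = [None] * size
    lost = 0
    for who, spot in arrivals:
        while spot < size and table[spot] is not None:
            spot += 1
        if spot < size:
            table[spot] = who
        else:
            lost += 1
    return table, lost
```

In the continuous model the analogue of the "lost" count disappears because files may split across the remaining free intervals, which is what makes the saturation regime interesting.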
