701

Squelettes algorithmiques pour la programmation et l'exécution efficaces de codes parallèles

Legaux, Joeffrey 13 December 2013 (has links) (PDF)
Parallel architectures are now present in all computer hardware, but programmers are generally not trained to program them with explicit models such as MPI or Pthreads. There is a strong need for more abstract models such as algorithmic skeletons, which offer a structured approach. Skeletons can be seen as higher-order functions capturing the behaviour of recurring parallel algorithms, which the developer then combines to build programs. Developers want better performance from parallel programs, but development time is also a very important factor, and algorithmic skeleton approaches deliver interesting results on both counts. The Orléans Skeleton Library (OSL) provides a set of bulk synchronous parallel, data-parallel algorithmic skeletons in C++ and uses advanced programming techniques to achieve good efficiency. We improved OSL to give it better performance and greater expressiveness. We then analysed the relationship between program performance and the programming effort required with OSL and with other parallel programming models. A rigorous comparison between parallel programs written in OSL and their low-level equivalents shows much better productivity for high-level models, which offer great ease of use while delivering acceptable performance.
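
The skeleton idea described in this abstract can be illustrated outside of C++. The short Python sketch below is a loose, hypothetical analogy (OSL itself is a C++ library and none of these names come from it): data-parallel skeletons such as map and reduce are exposed as higher-order functions that the developer composes, with a process pool standing in for the parallel runtime.

    from multiprocessing import Pool
    from functools import reduce as _reduce

    # Hypothetical skeletons: each wraps a recurring parallel pattern
    # behind a higher-order function, hiding the low-level details.

    def par_map(f, data, workers=4):
        """'map' skeleton: apply f to every element in parallel."""
        with Pool(workers) as pool:
            return pool.map(f, data)

    def par_reduce(op, data):
        """'reduce' skeleton: combine partial results (sequentially here)."""
        return _reduce(op, data)

    def square(x):
        return x * x

    if __name__ == "__main__":
        # The developer composes skeletons instead of writing MPI/Pthreads code.
        values = list(range(1_000))
        total = par_reduce(lambda a, b: a + b, par_map(square, values))
        print(total)  # sum of squares, computed with the map + reduce skeletons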
702

A contribution to semantic description of images and videos: an application of soft biometrics / Uma contribuição para descrição semântica de imagens e vídeos: uma aplicação de biometrias fracas

Perlin, Hugo Alberto 08 December 2015 (has links)
Fundação Araucária / Humans have a high ability to extract information from visual data acquired by sight. Through a learning process that starts at birth and continues throughout life, image interpretation becomes almost instinctive. At a glance, one can easily describe a scene with reasonable precision, naming its main components. Usually, this is done by extracting low-level features such as edges, shapes and textures, and associating them with high-level meanings; in other words, a semantic description of the scene is produced. An example of this is the human capacity to recognize and describe other people's physical and behavioural characteristics, or biometrics. Soft biometrics likewise represent inherent characteristics of the human body and behaviour, but do not allow unique identification of a person.
Computer vision aims to develop methods capable of performing visual interpretation with performance similar to humans. This thesis proposes computer vision methods for extracting high-level information from images in the form of soft biometrics. The problem is approached in two ways, with unsupervised and with supervised learning methods. The first seeks to group images through an automatically learned feature-extraction process, combining convolution techniques, evolutionary computation and clustering; the images used contain faces and people. The second approach employs convolutional neural networks, which can operate on raw images, learning both the feature extraction and the classification; here, images are classified by gender and by clothing, divided into the upper and lower parts of the human body. When tested on different image datasets, the first approach obtained an accuracy of approximately 80% for faces versus non-faces and 70% for people versus non-people. The second, tested on images and videos, obtained an accuracy of about 70% for gender, 80% for upper-body clothing and 90% for lower-body clothing. The results of these case studies show that the proposed methods are promising, enabling automatic annotation of high-level information. This opens possibilities for applications in areas such as content-based image and video retrieval and video surveillance, reducing the human effort required for manual annotation and monitoring.
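
As a rough illustration of the second, supervised approach, here is a minimal PyTorch sketch of a small convolutional classifier for a binary soft-biometric attribute such as gender. The architecture, sizes and names are illustrative assumptions, not the networks evaluated in the thesis.

    import torch
    import torch.nn as nn

    class SoftBiometricCNN(nn.Module):
        """Toy CNN mapping a 64x64 RGB crop to a binary attribute (e.g. gender)."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
            )
            self.classifier = nn.Sequential(
                nn.Flatten(),
                nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
                nn.Linear(64, 2),  # two classes; raw logits for CrossEntropyLoss
            )

        def forward(self, x):
            return self.classifier(self.features(x))

    model = SoftBiometricCNN()
    dummy = torch.randn(8, 3, 64, 64)   # a batch of 8 fake image crops
    logits = model(dummy)
    print(logits.shape)                 # torch.Size([8, 2])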
703

Efficient object versioning for object-oriented languages from model to language integration

Pluquet, Frédéric 03 July 2012 (has links)
Everyone has encountered the "Undo/Redo" feature that lets you move through earlier versions of a document. Although versioning -- saving and browsing several versions of given entities -- is needed by many applications, it is hard to implement both easily and efficiently in time and space. In this thesis we present an efficient and expressive versioning system for object-oriented languages.
We begin by developing a model that lets the developer select precisely which interesting parts of the system will be saved at key moments. This model makes it easy to browse the recorded versions and lets the versioned parts coexist smoothly with the parts the developer did not select. The model is also compatible with three kinds of versioning (linear, backtracking and branching versioning), which allow various operations on the timeline, such as deleting all versions after a given version or creating a new branch from an old version.
We then develop the time- and space-efficient data structures that implement this model in a real-world setting. Based on the work of Driscoll et al., they are adapted to the specifics of each kind of versioning.
We next show how this system can be integrated concretely into an object-oriented language, and in particular how it can be made transparent to the developer using tools such as aspects or bytecode transformation.
To validate our claims, we implemented the system in the Smalltalk and Java programming languages. We show real applications that use it, such as stateful post-conditions and the planar point location problem.
We conclude by evaluating the efficiency of our implementation through detailed benchmarks in Smalltalk and Java, studying in particular the space taken by our data structures and the execution time of each versioning operation. / Doctorat en Sciences / info:eu-repo/semantics/nonPublished
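
To make the versioning model concrete, below is a minimal, hypothetical Python sketch (the thesis targets Smalltalk and Java, and its structures build on Driscoll et al.; none of this code comes from it) of a backtracking-style versioned field: values are recorded at explicit snapshot points, old versions can be read back, and the timeline can be truncated after a given version.

    class VersionedField:
        """Toy backtracking versioning for one field: snapshot, read back, truncate."""

        def __init__(self, value):
            self._history = [value]   # version 0 is the initial value

        def snapshot(self, value):
            """Record a new version at a key moment chosen by the developer."""
            self._history.append(value)
            return len(self._history) - 1   # version number

        def at(self, version):
            """Read the value as it was at a given version."""
            return self._history[version]

        def backtrack(self, version):
            """Backtracking versioning: drop every version after 'version'."""
            del self._history[version + 1:]

    # Usage: version the balance of an account at chosen points in time.
    balance = VersionedField(0)
    v1 = balance.snapshot(100)
    v2 = balance.snapshot(250)
    print(balance.at(v1), balance.at(v2))  # 100 250
    balance.backtrack(v1)                  # discard versions recorded after v1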
704

J2EE vs. Microsoft Dot Net: A Qualitative and Quantitative Comparison for Building Enterprises Supporting XML-based Web Services

Clark, Raquel V. 01 January 2003 (has links)
Increasing network speeds and worldwide availability have made the World Wide Web the most significant medium for information exchange. Web technologies have become more and more important as large and small businesses continue to establish their presence on the web. Today's businesses have more than just a "face" on the World Wide Web: the web browser is no longer restricted to viewing static pages and is becoming a standard interface to a multifaceted realm of programs that live on the web. Two main technologies stand out for implementing web applications: Sun Microsystems' Java 2 Enterprise Edition (J2EE) and Microsoft's .NET Framework. The purpose of this study is to provide an unbiased comparison of the two technologies based on performance and other software qualities.
705

An empirical study: Usage of the Unified Modeling Language in the Bachelor of Science and Master of Science degree programs at California State University, San Bernardino

Farquhar, Cynthia Patrice 01 January 2005 (has links)
The Unified Modeling Language (UML) became part of the curriculum in the Department of Computer Science at California State University, San Bernardino (CSUSB) in September 1997. The intent was to integrate the object-oriented paradigm into the undergraduate courses; subsequently, this use shifted to the graduate level. The purpose of this thesis is: 1) to determine what students know about UML, 2) to reveal whether students are using UML, and 3) to clarify how students use UML.
706

Towards Data Wrangling Automation through Dynamically-Selected Background Knowledge

Contreras Ochando, Lidia 04 February 2021 (has links)
Data science is essential for the extraction of value from data. However, the most tedious part of the process, data wrangling, involves a range of mostly manual formatting, identification and cleansing manipulations. Data wrangling still resists automation partly because the problem strongly depends on domain information, which becomes a bottleneck for state-of-the-art systems as the diversity of domains, formats and structures of the data increases. In this thesis we focus on generating algorithms that take advantage of domain knowledge to automate parts of the data wrangling process. We illustrate the way in which general program induction techniques, instead of domain-specific languages, can be applied flexibly to problems where knowledge is important, through the dynamic use of domain-specific knowledge. More generally, we argue that a combination of knowledge-based and dynamic learning approaches leads to successful solutions. We propose several strategies to automatically select or construct the appropriate background knowledge for several data wrangling scenarios. The key idea is to choose the best specialised background primitives according to the context of the particular problem to solve. We address two scenarios. In the first one, we handle personal data (names, dates, telephone numbers, etc.) that are presented in very different string formats and have to be transformed into a unified format.
The problem is how to build a compositional transformation from a large set of primitives in the domain (e.g., handling months, years, days of the week, etc.). We develop a system (BK-ADAPT) that guides the search through the background knowledge by extracting several meta-features from the examples characterising the column domain. In the second scenario, we face the transformation of data matrices in generic programming languages such as R, using an input matrix and some cells of the output matrix as examples. We also develop a system guided by a tree-based search (AUTOMAT[R]IX) that uses several constraints, prior primitive probabilities and textual hints to efficiently learn the transformations. With these systems, we show that the combination of inductive programming with the dynamic selection of the appropriate primitives from the background knowledge is able to improve the results of other state-of-the-art and more specific data wrangling approaches. / This research was supported by the Spanish MECD Grant FPU15/03219; and partially by the Spanish MINECO TIN2015-69175-C4-1-R (Lobass) and RTI2018-094403-B-C32-AR (FreeTech) in Spain; and by the ERC Advanced Grant Synthesising Inductive Data Models (Synth) in Belgium. / Contreras Ochando, L. (2020). Towards Data Wrangling Automation through Dynamically-Selected Background Knowledge [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/160724 / TESIS
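
As a flavour of the first scenario, the hypothetical Python sketch below composes a few hand-written domain primitives to normalise date strings into a unified format. The primitives and names are assumptions made for illustration; BK-ADAPT searches over such primitives automatically rather than hard-coding the composition.

    import re

    # Hypothetical domain primitives; a real system would search over many of these.
    MONTHS = {"jan": "01", "feb": "02", "mar": "03", "apr": "04", "may": "05", "jun": "06",
              "jul": "07", "aug": "08", "sep": "09", "oct": "10", "nov": "11", "dec": "12"}

    def extract_day(s):
        return re.search(r"\b([0-3]?\d)\b", s).group(1).zfill(2)

    def extract_month(s):
        m = re.search(r"[A-Za-z]{3,}", s)
        if m:
            return MONTHS[m.group(0)[:3].lower()]
        return re.findall(r"\d{1,2}", s)[1].zfill(2)   # assume D/M/Y ordering otherwise

    def extract_year(s):
        return re.search(r"\d{4}", s).group(0)

    def to_iso(s):
        """A compositional transformation built from the primitives above."""
        return f"{extract_year(s)}-{extract_month(s)}-{extract_day(s)}"

    for raw in ["25 December 2013", "25/12/2013", "Dec 25, 2013"]:
        print(to_iso(raw))   # each prints: 2013-12-25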
707

Determinants of Health Care Use Among Rural, Low-Income Mothers and Children: A Simultaneous Systems Approach to Negative Binomial Regression Modeling

Valluri, Swetha 01 January 2011 (has links) (PDF)
The determinants of health care use among rural, low-income mothers and their children were assessed using a multi-state, longitudinal data set, Rural Families Speak. The results indicate that rural mothers’ decisions regarding health care utilization for themselves and for their child can be best modeled using a simultaneous systems approach to negative binomial regression. Mothers’ visits to a health care provider increased with higher self-assessed depression scores, increased number of child’s doctor visits, greater numbers of total children in the household, greater numbers of chronic conditions, need for prenatal or post-partum care, development of a new medical condition, and having health insurance (Medicaid/equivalent and HMO/private). Child’s visits to a health care provider, on the other hand, increased with greater numbers of chronic conditions, development of a new medical condition, and increased mothers’ visits to a doctor. Child’s utilization of pediatric health care services decreased with higher levels of maternal depression, greater numbers of total children in the household, if the mother had HMO/private health care coverage, if the mother was pregnant, and if the mother was Latina/African American. Mother’s use of health care services decreased with her age, increased number of child’s chronic conditions, income as a percent of the federal poverty line, and if child had HMO/private health care insurance. The study expands the econometric techniques available for assessing maternal and pediatric health care use and the results contribute to an understanding of how rural, low-income mothers choose the level of health care services use for themselves and for their child. Additionally, the results would assist in formulating policies to reorient the type of health care services provided to this vulnerable population.
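
For readers unfamiliar with the count-data model used in the study, here is a minimal single-equation negative binomial regression on simulated data using Python's statsmodels. The variables are invented and the sketch does not reproduce the simultaneous-equations estimator described above.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 500

    # Simulated covariates standing in for, e.g., depression score and insurance status.
    depression = rng.normal(size=n)
    insured = rng.integers(0, 2, size=n)

    # Simulated count outcome: doctor visits, overdispersed via a gamma mixture.
    mu = np.exp(0.5 + 0.4 * depression + 0.6 * insured)
    visits = rng.poisson(mu * rng.gamma(shape=2.0, scale=0.5, size=n))

    X = sm.add_constant(np.column_stack([depression, insured]))
    model = sm.GLM(visits, X, family=sm.families.NegativeBinomial(alpha=0.5))
    result = model.fit()
    print(result.summary())   # coefficients on the log scale of expected visit counts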
708

CyberWater: An open framework for data and model integration

Ranran Chen (18423792) 03 June 2024 (has links)
Workflow management systems (WMSs) are commonly used to organize and automate sequences of tasks as workflows to accelerate scientific discoveries. During complex workflow modeling, a local interactive workflow environment is desirable, as users usually rely on their rich local environments for fast prototyping and refinement before they consider using more powerful computing resources.
This dissertation describes the development of the CyberWater framework on top of workflow management systems. Against the backdrop of data-intensive and complex models, CyberWater exemplifies the transformation of intricate data into insightful and actionable knowledge. The dissertation introduces the architecture of CyberWater, focusing on its adaptation and enhancement of the VisTrails system, and highlights the significance of its control and data flow mechanisms and of the new data formats introduced for effective data processing within the framework.
The study presents an in-depth analysis of the design and implementation of the Generic Model Agent Toolkits. The discussion centers on template-based component mechanisms and integration with popular platforms, while emphasizing the toolkits' ability to provide on-demand access to high-performance computing resources for large-scale data handling. The development of asynchronously controlled workflows within CyberWater is also explored: this approach improves computational performance by exploiting pipeline-level parallelism and allows on-demand submission of HPC jobs, significantly improving the efficiency of data processing.
The research also introduces a methodology for model-driven development and Python code integration within the CyberWater framework, along with applications of GPT models for automated data retrieval. It examines the use of GitHub Actions to automate the data retrieval processes and discusses the transformation of raw data into a compatible format, enhancing the adaptability and reliability of the data retrieval component of the adaptive Generic Model Agent Toolkit.
For the development and maintenance of software within the CyberWater framework, tools such as GitHub are used for version control, and automated processes are outlined for software updates and error reporting; the CyberWater Server also plays a central role in user data collection.
In conclusion, this dissertation presents comprehensive work on the CyberWater framework's advancements, setting new standards in scientific workflow management and demonstrating how technological innovation can significantly elevate the process of scientific discovery.
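
The pipeline-level parallelism mentioned above can be sketched generically. The minimal Python asyncio example below is purely illustrative and uses none of CyberWater's actual APIs: two workflow stages are connected by a queue so that downstream processing overlaps with upstream data acquisition instead of waiting for it to finish.

    import asyncio

    async def acquire(queue):
        """Upstream stage: fetch raw items (e.g., data files) one by one."""
        for i in range(5):
            await asyncio.sleep(0.1)          # stand-in for I/O such as a download
            await queue.put(f"dataset-{i}")
        await queue.put(None)                 # sentinel: no more work

    async def process(queue):
        """Downstream stage: run the model on each item as soon as it arrives."""
        while True:
            item = await queue.get()
            if item is None:
                break
            await asyncio.sleep(0.2)          # stand-in for computation or an HPC submission
            print(f"processed {item}")

    async def main():
        queue = asyncio.Queue(maxsize=2)
        # Running both stages concurrently gives pipeline-level parallelism.
        await asyncio.gather(acquire(queue), process(queue))

    asyncio.run(main())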
709

Conception d'un Pro Logiciel Interactif sous R pour la Simulation de Processus de Diffusion

Guidoum, Arsalane 25 February 2012 (has links) (PDF)
In this work we propose a new package, Sim.DiffProc, for the simulation of diffusion processes, equipped with a graphical user interface (GUI), in the R language. The development of computing tools (software and hardware) in recent years motivated this work. With this package we can address many difficult theoretical problems linked to the use of diffusion processes in applied research, such as the numerical simulation of trajectories of the solution of an SDE. This allows many users in different fields to employ it as a sophisticated tool for modelling their practical problems. The problem of pollutant dispersion in the presence of an attractive domain, treated in this work, is a good example: it shows the usefulness and practical importance of diffusion processes in modelling and simulating complex real situations. The density function of the random variable tau(c), the first passage time through the boundary of the domain of attraction, can be used to determine the concentration rate of polluting particles inside the domain. The simulation studies and statistical analyses carried out with the Sim.DiffProc package prove efficient and effective compared with the theoretical results determined explicitly or approximately by the diffusion process models considered.
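
As a generic illustration of the kind of computation involved (Sim.DiffProc itself is an R package; the Python sketch below assumes an Ornstein-Uhlenbeck-type drift chosen purely for the example), here is an Euler-Maruyama scheme that simulates SDE trajectories and estimates the first time each trajectory leaves a given domain, i.e. an empirical view of tau(c).

    import numpy as np

    rng = np.random.default_rng(42)

    def simulate_first_passage(x0=0.0, boundary=0.5, theta=0.5, sigma=0.3,
                               dt=1e-2, t_max=20.0, n_paths=2000):
        """Euler-Maruyama for dX = -theta*X dt + sigma dW; record each path's first exit time."""
        n_steps = int(t_max / dt)
        passage_times = np.full(n_paths, np.nan)
        x = np.full(n_paths, x0)
        alive = np.ones(n_paths, dtype=bool)           # paths that have not crossed yet
        for step in range(1, n_steps + 1):
            dw = rng.normal(0.0, np.sqrt(dt), size=n_paths)
            x = x + (-theta * x) * dt + sigma * dw     # Euler-Maruyama update
            crossed = alive & (np.abs(x) >= boundary)
            passage_times[crossed] = step * dt
            alive &= ~crossed
            if not alive.any():
                break
        return passage_times

    tau = simulate_first_passage()
    observed = tau[~np.isnan(tau)]
    if observed.size:
        # A histogram of 'observed' estimates the density of the first-passage time.
        print(f"{observed.size} of {tau.size} paths crossed; "
              f"mean first-passage time {observed.mean():.2f}")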
710

AspectKE*: Security aspects with program analysis for distributed systems

Fan, Yang, Masuhara, Hidehiko, Aotani, Tomoyuki, Nielson, Flemming, Nielson, Hanne Riis January 2010 (has links)
Enforcing security policies on distributed systems is difficult, in particular when a system contains untrusted components. We designed AspectKE*, a distributed AOP language based on a tuple space, to tackle this issue. In AspectKE*, aspects can enforce access control policies that depend on the future behavior of running processes. One of the key language features is the predicates and functions that extract the results of static program analysis, which are useful for defining security aspects that have to know about the future behavior of a program. AspectKE* also provides a novel variable binding mechanism for pointcuts, so that pointcuts can uniformly specify join points based on both static and dynamic information about the program. Our implementation strategy performs the fundamental static analysis at load time, so as to keep runtime overheads minimal. We implemented a compiler for AspectKE*, and demonstrate the usefulness of AspectKE* through a security aspect for a distributed chat system.
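
AspectKE* is its own language, so the hypothetical Python sketch below is only a loose conceptual analogy: it wraps a toy tuple space so that a policy check (the "aspect") runs before every read operation. It does not model the static analysis of future behaviour that the abstract describes.

    class TupleSpace:
        """Toy tuple space with out (write) and rd (read) operations."""
        def __init__(self):
            self._tuples = []

        def out(self, tup):
            self._tuples.append(tup)

        def rd(self, pattern):
            # Pattern fields set to None act as wildcards.
            for tup in self._tuples:
                if len(tup) == len(pattern) and all(p is None or p == t
                                                    for p, t in zip(pattern, tup)):
                    return tup
            return None

    def access_control_aspect(space, allowed_users):
        """Advice around rd(): deny reads of 'secret' tuples for unauthorised users."""
        original_rd = space.rd
        def guarded_rd(pattern, user):
            if pattern and pattern[0] == "secret" and user not in allowed_users:
                raise PermissionError(f"{user} may not read secret tuples")
            return original_rd(pattern)
        space.rd = guarded_rd
        return space

    space = access_control_aspect(TupleSpace(), allowed_users={"alice"})
    space.out(("secret", "launch-code", 1234))
    print(space.rd(("secret", None, None), user="alice"))   # ('secret', 'launch-code', 1234)
    # space.rd(("secret", None, None), user="mallory")      # would raise PermissionError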
