241

Guia de expressões para cientistas empreendedores / A guide of expressions for entrepreneur scientists

Pinto, Luiz Cláudio da Silva January 2020 (has links)
Advisor: Rui Seabra Ferreira Júnior / Abstract: The inclusion of universities in the industry-government dyad has proven efficient in creating an ecosystem of innovation and entrepreneurship, presenting knowledge-based solutions to socioeconomic challenges. To that end, students need to experience situations comparable to those they will face in practice: learning to solve problems under pressure while interacting with peers and third parties, seizing opportunities, drawing inspiration and reference from the experiences of other entrepreneurs, and learning from their own mistakes through client feedback. The aim of the present study was to develop a guide of expressions for entrepreneur scientists, made available online through an app that runs on different platforms and devices. Some of the expressions were listed from the authors' academic and professional experience, others were suggested by professionals in management and technology, and still others arose during the bibliographic analysis. All of them were referenced from scientific papers published in electronic journals, academic repositories, and the websites of regulatory bodies, funding agencies and professional associations. The app that gives the guide its form was developed by the team of the Center for Distance Education and Information Technology in Health (Núcleo de Educação a Distância e Tecnologias da Informação em Saúde - NEAD.TIS) at FMB/UNESP. The acronym of Guia de Empreendedorismo para Cientistas Empreendedores was used... (Complete abstract: click electronic access below) / Mestre
242

Zdokonalení pravděpodobnostních metod pro lámání hesel / Enhancement of Probabilistic Methods for Password Cracking

Lištiak, Filip January 2019 (has links)
This thesis describes password cracking with probabilistic context-free grammars, specifically the PCFG Cracker tool. The aim of the thesis is to design and implement enhancements to this tool that reduce the size of the output dictionaries while maintaining an acceptable success rate. The work also optimizes critical parts of the tool that slowed down its overall running time. A further goal is to analyze and implement targeted-attack dictionaries that increase the scope and success rate of the generated passwords.
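The PCFG approach this work builds on can be sketched briefly: a grammar learned from leaked passwords assigns probabilities to base structures (e.g. three letters followed by two digits) and to the terminal strings that fill each slot, and guesses are emitted in descending probability order. The grammar below is a toy illustration, not PCFG Cracker's actual model or file format.

```python
import itertools

# Toy grammar "learned" from a training set: base structures (L3 = three
# letters, D2 = two digits, S1 = one symbol) with probabilities, and
# terminal groups with per-string probabilities. All values are illustrative.
base_structures = {("L3", "D2"): 0.6, ("L3", "S1"): 0.4}
terminals = {
    "L3": {"abc": 0.5, "cat": 0.5},
    "D2": {"12": 0.7, "99": 0.3},
    "S1": {"!": 1.0},
}

def guesses():
    """Return (probability, password) pairs, most probable first."""
    candidates = []
    for structure, p_s in base_structures.items():
        # Every combination of terminals that fits this base structure.
        for combo in itertools.product(*(terminals[t].items() for t in structure)):
            word = "".join(s for s, _ in combo)
            prob = p_s
            for _, p_t in combo:
                prob *= p_t
            candidates.append((prob, word))
    return sorted(candidates, reverse=True)

for prob, pw in guesses()[:3]:
    print(f"{pw}\t{prob:.3f}")
```

Reducing the output dictionary then amounts to cutting this ranked list off once the per-guess probability drops below a threshold.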
243

Automatická tvorba slovníků z překladových textů / Automatic Creation of Dictionaries from Translations

Musil, Jakub January 2010 (has links)
The aim of this thesis is to implement a system that translates words from a source language into a target language using paired input texts. Terms and methods used in machine translation and in machine-built dictionaries are described. The thesis also contains the design and specification of each part of the created system, including a final evaluation, and analyses options for extending an existing dictionary.
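A minimal sketch of the co-occurrence idea behind extracting word pairs from parallel texts: count how often each source word appears in sentences whose translation contains a given target word, and keep the most frequent pairing. The toy corpus and language pair below are invented, and real systems use association measures (Dice, log-likelihood) or IBM-style alignment models rather than raw counts, which misalign frequent function words.

```python
from collections import Counter

# Invented sentence-aligned (source, target) pairs for illustration.
pairs = [
    ("the dog runs", "pes běží"),
    ("the cat runs", "kočka běží"),
    ("the dog sleeps", "pes spí"),
]

def cooccurrence_dictionary(pairs):
    """Map each source word to the target word it co-occurs with most often."""
    counts = Counter()
    for src, tgt in pairs:
        for s in src.split():
            for t in tgt.split():
                counts[(s, t)] += 1
    best = {}
    for (s, t), c in counts.items():
        if c > best.get(s, (None, 0))[1]:
            best[s] = (t, c)
    return {s: t for s, (t, _) in best.items()}

print(cooccurrence_dictionary(pairs))
```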
244

Apprentissage de représentations en imagerie fonctionnelle / Learning representations from functional MRI data

Mensch, Arthur 28 September 2018 (has links)
Thanks to the advent of functional brain-imaging technologies, cognitive neuroscience is accumulating quantitative maps of neural activity elicited in the human brain by specific tasks or stimuli, or recorded spontaneously. In this thesis we consider data from functional Magnetic Resonance Imaging (fMRI), which we study in a machine-learning setting: we learn models of brain activity that should generalize to unseen data. After reviewing standard fMRI data-analysis techniques, we propose new methods and models to benefit from the recently released large fMRI data repositories, with the goal of learning richer representations of brain activity. We first focus on unsupervised analysis of terabyte-scale fMRI data acquired on subjects at rest (resting-state fMRI), which we perform using matrix factorization. We present new methods for running sparse matrix factorization/dictionary learning on hundreds of fMRI records in reasonable time, which allows us to extract functional networks from data of unprecedented scale. Our leading approach introduces randomness into stochastic online-learning loops, in the form of a random reduction of the data dimension at each iteration. The proposed algorithm converges more than ten times faster than the best existing methods, across a variety of settings and datasets. We provide an extended empirical validation of this stochastic subsampling approach on datasets from fMRI, hyperspectral imaging and collaborative filtering, and derive its convergence properties in a theoretical analysis that reaches beyond the matrix-factorization problem.
We then turn to fMRI data acquired on subjects undergoing behavioral protocols (task fMRI) and investigate how to aggregate data from many source studies, acquired with different protocols, in order to learn more accurate and interpretable decoding models that predict stimuli or tasks from brain maps. Our multi-study shared-layer model learns to reduce the dimensionality of input brain images while simultaneously learning to decode these images, for each study, from their reduced representations. This fosters transfer learning between studies, as the model captures the undocumented cognitive aspects that the many fMRI studies share. As a consequence, our multi-study model performs better than decoders trained on each study separately, and it identifies a universally relevant representation of brain activity, supported by a small number of task-optimized networks learned during model fitting. Finally, on a related topic, we show how to use dynamic programming within end-to-end trained deep networks, with applications in natural language processing.
245

Database forensics : Investigating compromised database management systems

Beyers, Hector Quintus January 2013 (has links)
The use of databases has become an integral part of modern life. The data they contain often has substantial value to enterprises and individuals, and as databases become a greater part of people's daily lives, they become increasingly interlinked with human behaviour. Negative aspects of this behaviour may include criminal activity, negligence and malicious intent. In such scenarios a forensic investigation is required to collect evidence, determine what happened at a crime scene, and establish who is responsible. A large amount of the available research focuses on digital forensics, database security and databases in general, but little research exists on database forensics as such. It is difficult for a forensic investigator to conduct an investigation on a DBMS because of the limited information on the subject and the absence of a standard approach to follow. Investigators therefore have to consult disparate sources on database forensics in order to compile a self-invented investigation approach. A consequence of this lack of research is that compromised DBMSs (DBMSs that have been attacked and therefore behave abnormally) are neither considered nor well understood in the database forensics field. The concept of a compromised DBMS was illustrated in an article by Olivier, who suggested that the ANSI/SPARC model can assist in a forensic investigation of such a system. Based on the ANSI/SPARC model, the DBMS is divided into four layers: the data model, the data dictionary, the application schema and the application data. The first three, extensional, layers can influence the application data layer and ultimately manipulate the results it produces.
It therefore becomes problematic to conduct a forensic investigation on a DBMS if the integrity of the extensional layers is in question, since the results on the application data layer cannot be trusted. To recover the integrity of a layer, a clean (newly installed) layer could be used, but clean layers are not always easy or even possible to configure, depending on the forensic scenario. A combination of clean and existing layers can therefore be used to investigate a DBMS.
PROBLEM STATEMENT: The problem addressed is how to construct the appropriate combination of clean and existing layers for a forensic investigation on a compromised DBMS, and how to ensure the integrity of the forensic results.
APPROACH: The study divides the relational DBMS into four abstract layers, illustrates how each layer can be prepared in either a found or a clean forensic state, and experimentally combines the prepared layers according to the forensic scenario. It opens with background on databases, digital forensics and database forensics to give the reader an overview of the existing literature in these fields. It then discusses the four abstract layers of the DBMS and explains how they can influence one another. The clean and found environments are introduced because the DBMS differs from the technologies on which digital forensics has already been researched. Each of the extensional abstract layers is then discussed individually, explaining how and why it can be converted to a clean or a found state. This per-layer discussion is needed to understand how unique each layer of the DBMS is and how the layers can be combined in a way that enables a forensic investigator to examine a compromised DBMS. It is shown that each layer is unique and can be corrupted in various ways.
Each layer must therefore be studied individually in a forensic context before all four layers are considered collectively. A forensic study is conducted on each abstract layer that has the potential to influence other layers and deliver incorrect results. Ultimately, the DBMS itself is used as a forensic tool to extract evidence from its own encrypted data and data structures. The last chapter therefore illustrates how an investigator can prepare a trustworthy forensic environment in which an entire PostgreSQL DBMS can be examined, by constructing a combination of the appropriate forensic states of the abstract layers.
RESULTS: The study yields an empirically demonstrated approach for dealing with a compromised DBMS during a forensic investigation by combining various states of its abstract layers. Approaches are suggested for handling a forensic query on the data model, data dictionary and application schema layers, and a forensic process is proposed for preparing the DBMS to extract evidence from itself. The study also advises forensic investigators to consider alternative ways in which a DBMS could have been attacked, possibilities that may not have been considered in investigations to date. The methods have been tested on a practical example and have delivered promising results. / Dissertation (MEng)--University of Pretoria, 2013. / gm2014 / Electrical, Electronic and Computer Engineering / unrestricted
246

Kamusi ya Kiswahili sanifu in test: A computer system for analyzing dictionaries and for retrieving lexical data

Hurskainen, Arvi January 1994 (has links)
The paper describes a computer system for testing the coherence and adequacy of dictionaries. The system is also well suited to retrieving lexical material in context from computerized text archives. Results are presented from a series of tests made with Kamusi ya Kiswahili Sanifu (KKS), a monolingual Swahili dictionary. The test of the internal coherence of KKS shows that the dictionary text itself contains several hundred words for which there is no entry in the dictionary; examples and frequency counts of the most often occurring of these words are given. The adequacy of KKS was also tested against a corpus of nearly one million words: 1.32% of the words in book texts were not recognized by KKS, and for newspaper texts the figure was 2.24%. The higher number for newspaper texts is partly due to the numerous names occurring in news articles. Some statistics are given on the frequencies of word forms not recognized by KKS. The tests show that although KKS covers the modern vocabulary quite well, there are several areas where the dictionary should be improved. Its internal coherence is far from satisfactory, and more than a thousand rather common words found in prose text are not included in KKS. The system described in this article is an effective tool for detecting problems and for retrieving lexical data in context for missing words.
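The adequacy test amounts to computing an out-of-vocabulary rate: tokenize the corpus, look each token up in the dictionary's headword list, and report the share that is missing. A minimal sketch follows; the lexicon and corpus are invented, and a real lookup for Swahili would first normalize inflected forms.

```python
import re

# Toy lexicon and corpus; in the study the lexicon was KKS and the
# corpus held nearly a million words.
lexicon = {"kamusi", "ya", "kiswahili", "sanifu", "ni", "kitabu"}
corpus = "Kamusi ya Kiswahili Sanifu ni kamusi muhimu"

def oov_rate(corpus, lexicon):
    """Return the share of corpus tokens missing from the lexicon,
    together with the missing tokens themselves."""
    tokens = re.findall(r"\w+", corpus.lower())
    missing = [t for t in tokens if t not in lexicon]
    return len(missing) / len(tokens), missing

rate, missing = oov_rate(corpus, lexicon)
print(f"{rate:.2%} unrecognized: {missing}")
```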
247

K lexikografickému zpracování terminologie historických věd z perspektivy němčina-čeština / On the Lexicographical Treatment of Historical Sciences Terminology in a German-Czech Dictionary

Vavřinková, Pavla January 2019 (has links)
This thesis focuses on the terminology of the historical sciences and its lexicographical treatment in a German-Czech dictionary of terms. It starts with the problems of languages for special purposes and terminology, followed by a characterization of Czech and German lexicography. It then analyses existing dictionaries of terms and, on that basis, proposes a new concept for such a dictionary.
248

Korektor diakritiky / Automatic Generator of Diacritics

Veselý, Lukáš January 2007 (has links)
The goal of this diploma thesis is the design and implementation of an application that adds or removes diacritics in Czech text. The retrieval structure known as a trie is described, along with its relation to finite automata. An algorithm for minimizing finite automata is then described, and various methods for adding diacritics are discussed. The practical part presents an implementation in the Java programming language using an object-oriented approach. The results achieved are evaluated and analysed in the conclusion.
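The core restoration step can be sketched as a lookup from diacritics-stripped forms to full forms; the thesis stores such a mapping in a trie for compactness and speed, and the words below are just a toy lexicon. Ambiguity, where one stripped form corresponds to several accented words, is the hard part this sketch ignores.

```python
import unicodedata

def strip_diacritics(s):
    """Remove combining marks: 'žluťoučký' -> 'zlutoucky'."""
    return "".join(
        c for c in unicodedata.normalize("NFD", s)
        if not unicodedata.combining(c)
    )

# Toy Czech lexicon; at scale a trie over the stripped forms would
# replace this plain dict.
lexicon = ["příliš", "žluťoučký", "kůň"]
restore = {strip_diacritics(w): w for w in lexicon}

def add_diacritics(text):
    """Restore diacritics word by word, leaving unknown words unchanged."""
    return " ".join(restore.get(w, w) for w in text.split())

print(add_diacritics("prilis zlutoucky kun"))
```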
249

Belongings

Larsen, Nickolaus B. 27 August 2019 (has links)
No description available.
250

Лексика коневодства: опыт двуязычного словаря : магистерская диссертация / The vocabulary of horse breeding: a variant of a bilingual dictionary

Снесарь, Н. В., Snesar, N. V. January 2023 (has links)
This master's degree project describes the author's experience of creating a bilingual (English-Russian) dictionary of horse-breeding vocabulary.