281

Authenticity in an open data context : Challenges and opportunities from an archival science perspective

Engvall, Tove January 2012 (has links)
Traditionally, archival science has developed in a context of defined information processes, with explicit information producers and custodians to whom consumers direct requests for information. Within this process, archival science has developed methods and strategies to preserve authentic and reliable records and thereby provide trustworthy information. In an online society, people use the internet to obtain information for different purposes. Even though there is no legal obligation to guarantee authenticity, it is of societal importance that end users get trustworthy information. In this online context, open data is a trend that is growing fast across the world, and it is interesting because its conditions raise many questions regarding authenticity. Since open data is free to reuse, link and combine with other information, and is preferably provided in primary format, it raises questions about how to maintain the integrity and identity of the information, which are the constituents of authenticity as the concept is used in this work. The aim of this essay is to discuss the challenges of maintaining the authenticity of open data and to identify possible measures to promote authentic open data on the web, so that end users are able to assess its trustworthiness and fitness for use. The essay is a qualitative text analysis, with its theoretical base in the results of the InterPARES project. The Open Government Working Group's eight principles and the Open Knowledge Foundation's definition are discussed, as is the discussion from other disciplines about provenance on the web and ideas from digital records forensics. The results indicate that there are great challenges in maintaining the authenticity of open data, but there are also some solutions. Recorded provenance and traceability are key factors that enable the evaluation of authenticity. But first the concept of authenticity has to be interpreted in a wider sense: there is a need to maintain the authenticity of the parts, the data, within the information, because the information is used in parts, and if new information created from it is to be reliable, it needs accurate data with an established identity.
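The abstract's two constituents of authenticity, identity and integrity, can be made concrete with a small illustration. The sketch below is not from the thesis; it is a minimal, assumed provenance record in Python (field names are invented for this sketch, not a standard schema such as W3C PROV) that pairs a dataset's identity metadata with a content hash so that reusers of open data can verify integrity and trace origin.

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone


@dataclass
class ProvenanceRecord:
    """Minimal, illustrative provenance record for an open dataset."""
    dataset_id: str   # identity: stable identifier of the dataset
    publisher: str    # identity: who asserts responsibility for it
    issued: str       # identity: when this version was published
    sha256: str       # integrity: hash of the published bytes


def make_record(dataset_id: str, publisher: str, payload: bytes) -> ProvenanceRecord:
    """Create a provenance record for a published payload."""
    return ProvenanceRecord(
        dataset_id=dataset_id,
        publisher=publisher,
        issued=datetime.now(timezone.utc).isoformat(),
        sha256=hashlib.sha256(payload).hexdigest(),
    )


def verify(record: ProvenanceRecord, payload: bytes) -> bool:
    """Check that a reused copy still matches the published hash."""
    return hashlib.sha256(payload).hexdigest() == record.sha256


if __name__ == "__main__":
    data = b"station,pm10\nLulea,12\n"
    record = make_record("air-quality-2012", "Example Municipality", data)
    print(json.dumps(asdict(record), indent=2))
    print("integrity intact:", verify(record, data))
    print("after tampering:", verify(record, data + b"Kiruna,999\n"))
```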
282

Results from software engineering research into open source development projects using public data

Koch, Stefan, Schneider, Georg January 2000 (has links) (PDF)
This paper presents the first results from research into open source projects from a software engineering perspective. The research methodology employed relies on public data retrieved from the CVS repository of the GNOME project and relevant discussion groups. This methodology is described in detail and some of the results concerning the special characteristics of open source software development are given. (author's abstract) / Series: Diskussionspapiere zum Tätigkeitsfeld Informationsverarbeitung und Informationswirtschaft
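The methodology described, deriving development metrics from public CVS data, can be sketched roughly as follows. This is not the authors' code; it assumes a `cvs log` transcript has already been saved to a text file and that revision entries contain lines of the assumed form `date: ...;  author: name;  ...`.

```python
import re
from collections import Counter

# A CVS revision entry in `cvs log` output is assumed here to look roughly like:
#   date: 2000/05/17 14:32:10;  author: jdoe;  state: Exp;  lines: +12 -3
REVISION_LINE = re.compile(r"^date:\s*(?P<date>[^;]+);\s*author:\s*(?P<author>[^;]+);")


def commits_per_author(log_path: str) -> Counter:
    """Count revision entries per author in a saved `cvs log` transcript."""
    counts: Counter = Counter()
    with open(log_path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            match = REVISION_LINE.match(line.strip())
            if match:
                counts[match.group("author").strip()] += 1
    return counts


if __name__ == "__main__":
    # Hypothetical transcript produced beforehand with: cvs log > gnome_cvs_log.txt
    for author, n in commits_per_author("gnome_cvs_log.txt").most_common(10):
        print(f"{author:20s} {n:6d} revisions")
```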
283

Open innovation and intellectual property law from an anti-commons perspective

Käkelä, Nikolas, Lindblom, Erik January 2014 (has links)
No description available.
284

People in place : a configuration of physical form and the dynamic patterns of spatial occupancy in urban open public space

Goličnik, Barbara January 2005 (has links)
This thesis is a critical inquiry into the spatial relationships between occupancy and the physical structure of squares and parks in city centres. It focuses on the usability and spatial capacity of places from two different angles. Firstly, it discusses the actual uses mapped in places, using repeated observation on different days, at different times and under different weather conditions. This yields empirical knowledge about dimensions and spatial requirements, especially for long-stay active uses such as ball games in parks and skateboarding in squares, about how long-stay passive uses such as sitting relate to them, and about how transitory activities relate to both kinds of long-stay engagement. It also illustrates how some activities can be contiguous, while others require 'buffer' zones between them for effective use. Secondly, the thesis addresses the uses imagined in parks and squares by urban landscape designers, using two approaches: mapping out likely uses on detailed maps of selected places, and inferring the physical structure of a particular place from knowledge of its behavioural patterns. On this basis, the thesis examines designers' tacit knowledge of the usage-spatial relationship and highlights the potential applicability, role and value of empirically gained knowledge in the design of parks and squares. It shows that designers' beliefs about, and awareness of, uses in places differ in some respects from actual use. From this point of view, it reveals a need for effective design-research integration and stresses the importance of empirical knowledge and its incorporation in design. The thesis promotes GIS as a successful practical tool to build, develop and maintain a body of empirical knowledge, using interactive GIS maps as its scripts. Concerning the implementation of such knowledge in urban public open space design, the visualisation of research findings, and its relation to decision-making, evaluation and management, is of key operational importance.
285

Development of tools in NX/Teamcenter : Implementation of streamlined and automated working methods for handling forming surfaces and mirroring of press-hardening tools

Hellström, Jonas January 2017 (has links)
This thesis project was carried out at Gestamp Hardtech in Luleå, at the Tooling Design department. Today the company works with both I-DEAS and NX to design the press-hardening tools used to manufacture safety components for cars. During the project I studied the methodology the designers use when they work in NX to trim forming parts and mirror press-hardening tools. Today this work is done manually, which means many steps must be carried out by the designers for every trim or mirroring operation. The work therefore becomes time-consuming, and the risk of errors increases because there is no simple way for the designers to check that trimming has been done with the correct forming surface or that all names of the mirrored parts are correct. The work resulted in two programs that the designers can use in NX to ease their work. Both programs are constrained so that all data entry takes place in the first dialog box shown when the program is started. Collecting the input in a single step reduces the risk of errors and makes the work process easier for the designers. Both programs were developed using Visual Studio, Microsoft's own IDE (Integrated Development Environment). In accordance with the standards set by Hardtech, Visual Basic was used as the programming language, and the NX API was used in both solutions so that the finished programs can communicate with NX. To verify the results of the project and confirm that the programs have the right functionality, the designers have used both programs in real projects and provided feedback. / This master's thesis was conducted at Gestamp Hardtech in Luleå, at their Tooling department. Today the designers use 3D CAD systems such as I-DEAS and NX to develop and build the press-hardening tools used in the manufacture of safety details for the automotive industry. The thesis discusses the methods the designers currently use to shape the forming dies and to mirror a press-hardening tool, and how this work can be improved. Today, both the shaping of the forming dies and the mirroring of a press-hardening tool are done manually. This work involves many steps, and the designers can easily make errors, especially since there is no easy way to check whether the forming dies have the correct geometry. The manual process is therefore very error-prone and improvements are much needed. As part of this work, two programs were developed for NX for the designers to use, in order to reduce the risk of errors when shaping the forming dies and mirroring the press-hardening tool. Much of the focus has been on making the programs as easy to use as possible, e.g. by collecting all input data at the start of the program and designing the product to be as close as possible to a one-button interface. Both programs were developed with Microsoft's own IDE, Visual Studio, and the code is structured according to the guidelines given by Hardtech. The programming language used for both programs is VB.NET, and NX's own API is used for the communication between NX and the programs. The validation of both solutions has taken the form of interviews and discussions with the tool designers; throughout the project, the tool designers have tested and validated the programs in both test environments and real projects.
286

Discussion on Fifty Years of Classification and Regression Trees

Rusch, Thomas, Zeileis, Achim 12 1900 (has links) (PDF)
In this discussion paper, we argue that the literature on tree algorithms is very fragmented. We identify possible causes and discuss the good and bad sides of this situation. Among the latter is the lack of free open-source implementations for many algorithms. We argue that if the community adopts a standard of creating and sharing free open-source implementations of the algorithms it develops, and creates easy access to these programs, the bad sides of the fragmentation will be actively combated, to the benefit of the whole scientific community. (authors' abstract)
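To illustrate the kind of freely available, open-source tree implementation the authors advocate, here is a minimal sketch using scikit-learn's decision tree classifier; it is a generic illustration, not code from or associated with the discussion paper.

```python
# Minimal sketch: fitting and inspecting an open-source classification tree.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print("test accuracy:", tree.score(X_test, y_test))
print(export_text(tree))  # the fitted splits, readable and reproducible by anyone
```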
287

Separation Properties

Garvin, Billy Ray 12 1900 (has links)
The problem with which this paper is concerned is that of investigating a class of topological properties commonly called separation properties. A topological space which satisfies only the definition may be very limited in open sets. By use of the separation properties, specific families of open sets can be guaranteed.
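For readers unfamiliar with the topic, the separation axioms the abstract refers to can be stated briefly; the following formulation of the lowest axioms is standard and is not taken from this thesis.

```latex
% Standard statements of the lowest separation axioms for a topological space (X, \tau).
\begin{itemize}
  \item $T_0$: for distinct $x, y \in X$ there is an open set containing exactly one of them.
  \item $T_1$: for distinct $x, y \in X$ there are open sets $U \ni x$ and $V \ni y$ with
        $y \notin U$ and $x \notin V$ (equivalently, every singleton is closed).
  \item $T_2$ (Hausdorff): for distinct $x, y \in X$ there are \emph{disjoint} open sets
        $U \ni x$ and $V \ni y$.
\end{itemize}
Each axiom guarantees a richer supply of open sets than the bare definition of a topology,
which is satisfied even by the indiscrete topology $\tau = \{\emptyset, X\}$.
```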
288

Matador

Patino, Julio 05 1900 (has links)
Matador is an opera scored for orchestra, mixed chorus and soloists (mezzo-soprano, 3 tenors, 2 baritones). The work is in one act divided into two main sections, each of which is divided into subsections. The libretto is aphoristic in nature and dictates the form of each of these subsections. The division into two parts also serves as a means to evoke a sense of emotional hopelessness in the first and a transforming disposition that culminates in a jubilant song in the second.
289

An Evaluation of the Open Airways for Schools Program

Thurber, James January 2007 (has links)
Class of 2007 Abstract / Objectives: This study assessed the impact of an Open Airways for Schools program for children with asthma that is delivered in their school by trained asthma volunteers sponsored by the local American Lung Association. Methods: Design: Retrospective. Setting: Eight elementary schools located throughout Tucson, Arizona. Participants: A total of 77 pre and post questionnaires for children in grades 3 to 5 with asthma, 30 pre and post questionnaires for parents, and 6 demographic questionnaires for school nurses. Measurements: Data collection involved obtaining pre and post questionnaires from the sponsoring agency measuring outcomes in knowledge of when and how much medication to take, triggers of asthma, steps to take upon wheezing, and social aspects such as the ability to talk with an adult or teacher when having problems. The dependent variables for the pre and post parent questionnaires include unscheduled visits to providers and whether the child knows how much medication to take. A paired t test was used to determine whether differences existed between pre and post child and parent questionnaires. Nurse questionnaires were analyzed and reported to examine the change in nurse visits. Results: The results are reported as mean ± SD (pre vs post). The child questionnaire data for knowledge outcomes include: when to take medicine (0.14+/-0.35 vs 0.34+/-0.61; p=0.015), how much medicine to take upon wheeze/cough (0.38+/-0.71 vs 0.67+/-0.80; p=0.003), identifying home triggers (0.36+/-0.68 vs 0.58+/-0.80; p=0.051), identifying school triggers (0.53+/-0.75 vs 0.70+/-0.80; p=0.228), and steps to take upon wheezing (0.21+/-0.48 vs 0.46+/-0.74; p=0.018). Social aspects data include: ability to talk to an adult about asthma (0.17+/-0.45 vs 0.29+/-0.58; p=0.159), talk to a teacher about asthma (0.28+/-0.57 vs 0.30+/-0.67; p=0.858), and talk to a teacher about taking things out of the classroom that make them wheeze (0.43+/-0.17 vs 0.77+/-0.85; p=0.19). The parent questionnaire data include: unscheduled provider visits (2.83+/-4.01 vs 3.61+/-7.15; p=0.508) and quantity of medicine to take, with incomplete data. The nurse questionnaire showed a mean number of visits of 92.5+/-64.09. Conclusions: Providing an asthma education program to children in school can significantly improve knowledge of when and how much medicine to take upon wheezing, and of the steps to follow when wheezing occurs. Additionally, areas to focus on in the program include identification of triggers at home and at school, as well as the ability to talk with an adult or teacher regarding asthma.
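The pre/post comparisons above were analysed with paired t tests. As a rough illustration of that kind of analysis (not the study's actual code, and using made-up scores rather than the study's data), the sketch below runs a paired t test with scipy on hypothetical pre and post knowledge scores for the same children.

```python
# Illustrative paired t test on hypothetical pre/post scores for the same children.
from scipy import stats

pre_scores = [0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0]    # hypothetical values
post_scores = [1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0]   # hypothetical values

t_stat, p_value = stats.ttest_rel(post_scores, pre_scores)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```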
290

Study of Facebook’s application architecture

Sundar, Nataraj January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / Xinming (Simon) Ou / Facebook is a social networking service launched in February 2004, which currently has 600 million active users. Users can create a personal profile, add friends, and exchange messages and notifications when they change their profiles. Facebook has the highest usage among all social networks worldwide. Its most valuable asset is access to the personal data of all its users, making the security of such data a primary concern. Users' data can be accessed by Facebook and by third parties through Applications (web applications loaded in the context of Facebook; building an application on Facebook allows integration with many aspects of the site, such as the user's profile information, news feed and notifications). "On profile" advertisement in Facebook is a classic example of how Facebook tailors the advertisements a user sees based on the information in his or her profile. Because the user-friendliness and ease of use of Applications have been prioritized over the security of users' data, serious questions about privacy are raised. We provide here an in-depth view of Facebook's Application authentication and authorization architecture. We have included what, in our opinion, are the positives and negatives, and suggested improvements. This document takes on the role of the User, the Application and the Facebook server at appropriate points.
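The authentication and authorization architecture the report examines follows the general OAuth 2.0 pattern that Facebook Applications used at the time: the Application redirects the user to Facebook to grant permissions, receives a short-lived code, and exchanges it server-side for an access token that unlocks the user's profile data. The sketch below shows that pattern in outline; the endpoint URLs, parameter names and placeholders are assumptions of this sketch rather than content taken from the report.

```python
# Outline of the OAuth 2.0 authorization-code exchange used by Facebook apps;
# URLs, parameters and placeholders below are assumptions of this sketch.
from urllib.parse import urlencode

import requests

APP_ID = "YOUR_APP_ID"            # hypothetical placeholders
APP_SECRET = "YOUR_APP_SECRET"
REDIRECT_URI = "https://example.com/callback"


def login_url(scope: str = "email") -> str:
    """Step 1: send the user to Facebook's authorization dialog."""
    params = {"client_id": APP_ID, "redirect_uri": REDIRECT_URI, "scope": scope}
    return "https://www.facebook.com/dialog/oauth?" + urlencode(params)


def exchange_code(code: str) -> str:
    """Step 2: trade the code returned to REDIRECT_URI for an access token."""
    resp = requests.get(
        "https://graph.facebook.com/oauth/access_token",
        params={
            "client_id": APP_ID,
            "client_secret": APP_SECRET,
            "redirect_uri": REDIRECT_URI,
            "code": code,
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.text  # token material; exact format varies by Graph API version


def fetch_profile(access_token: str) -> dict:
    """Step 3: the Application reads profile data the user authorized it to see."""
    resp = requests.get(
        "https://graph.facebook.com/me",
        params={"access_token": access_token},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()
```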
