571

Advances in Document Layout Analysis

Bosch Campos, Vicente 05 March 2020 (has links)
Handwritten Text Segmentation (HTS) is a task within the Document Layout Analysis field that aims to detect and extract the different page regions of interest found in handwritten documents. HTS remains an active topic that has gained importance over the years, owing to the increasing demand to provide textual access to the myriad handwritten document collections held by archives and libraries. This thesis considers HTS as a task that must be tackled in two specialized phases: detection and extraction. We see the detection phase fundamentally as a recognition problem that yields the vertical position of each region of interest as a by-product. The extraction phase consists in calculating the best contour coordinates of the region using the position information provided by the detection phase. Our proposed detection approach allows us to attack both higher-level regions (paragraphs, diagrams, etc.) and lower-level regions such as text lines. In the case of text line detection we model the problem to ensure that the vertical position yielded by the system approximates the fictitious line that connects the lower part of the grapheme bodies in a text line, commonly known as the baseline. One of the main contributions of this thesis is that the proposed modelling approach allows us to include prior information regarding the layout of the documents being processed. This is performed via a Vertical Layout Model (VLM). We develop a Hidden Markov Model (HMM) based framework to tackle both region detection and classification as an integrated task and study the performance and ease of use of the proposed approach on many corpora. We review the modelling simplicity of our approach to process regions at different levels of information: text lines, paragraphs, titles, etc. We study the impact of adding deterministic and/or probabilistic prior information and restrictions via the VLM that our approach provides. Having a separate phase that accurately yields the detection position (baselines in the case of text lines) of each region greatly simplifies the problem that must be tackled during the extraction phase. In this thesis we propose to use a distance map that takes into consideration the grey-scale information in the image. This allows us to yield extraction frontiers which are equidistant to the adjacent text regions. We study how the accuracy of our approach scales with the quality of the provided detection vertical positions. Our extraction approach gives near-perfect results when human-reviewed baselines are provided. / Bosch Campos, V. (2020). Advances in Document Layout Analysis [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/138397
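As a purely illustrative companion to the extraction idea described above, the sketch below computes a separating frontier between two consecutive baselines using a distance map derived from the grey-scale image; the baseline coordinates, the binarisation threshold and the synthetic test page are assumptions made for the example, and this is not the thesis implementation.

```python
# Minimal sketch: place an extraction frontier between two detected baselines
# so that it stays as far as possible from the surrounding ink, using a
# distance map computed from the grey-scale image. Illustrative only.
import numpy as np
from scipy.ndimage import distance_transform_edt

def extraction_frontier(grey, baseline_upper, baseline_lower, ink_threshold=128):
    """Return one frontier y-coordinate per column, between the two baselines.

    grey           : 2-D uint8 array, 0 = black ink, 255 = white background.
    baseline_upper : y-coordinate of the upper text line's baseline.
    baseline_lower : y-coordinate of the lower text line's baseline.
    """
    # Distance of every background pixel to the nearest ink pixel.
    ink = grey < ink_threshold
    dist_to_ink = distance_transform_edt(~ink)

    y0, y1 = baseline_upper + 1, baseline_lower      # search band between lines
    band = dist_to_ink[y0:y1, :]
    rows = np.arange(y0, y1)

    # Prefer pixels far from ink; break ties towards the geometric midpoint,
    # which approximates a frontier equidistant from the two text regions.
    midpoint = (baseline_upper + baseline_lower) / 2.0
    tie_break = -np.abs(rows - midpoint)[:, None] * 1e-3
    frontier = rows[np.argmax(band + tie_break, axis=0)]
    return frontier

# Synthetic white page with two dummy "text lines" and hypothetical baselines:
page = np.full((200, 400), 255, dtype=np.uint8)
page[60:80, 50:350] = 0     # fake upper text line
page[120:140, 50:350] = 0   # fake lower text line
print(extraction_frontier(page, baseline_upper=80, baseline_lower=140)[:10])
```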
572

Neural Networks for Document Image and Text Processing

Pastor Pellicer, Joan 03 November 2017 (has links)
Nowadays, the main libraries and document archives are investing considerable effort in digitizing their collections. Indeed, most of them are scanning the documents and publishing the resulting images without their corresponding transcriptions, which seriously limits the possibilities for exploiting these documents. When a transcription is necessary, it is produced manually by human experts, which is a very expensive and error-prone task. Obtaining transcriptions of the required quality demands the intervention of human experts to review and correct the output of the recognition engines. To this end, it is extremely useful to provide interactive tools for obtaining and editing the transcription. Although text recognition is the final goal, several previous steps (known as preprocessing) are necessary in order to get a good transcription from a digitized image. Document cleaning, enhancement, and binarization (if needed) are the first stages of the recognition pipeline. Historical handwritten documents, in addition, show various degradations, stains, ink bleed-through and other artifacts. Therefore, more sophisticated and elaborate methods are required when dealing with these kinds of documents, and in some cases expert supervision is even needed. Once images have been cleaned, the main zones of the image have to be detected: those that contain text and other parts such as images, decorations and versal letters. Moreover, the relations among them and with the final text have to be detected. These preprocessing steps are critical for the final performance of the system, since an error at this point will be propagated through the rest of the transcription process. The ultimate goal of the Document Image Analysis pipeline is to obtain the transcription of the text (Optical Character Recognition and Handwritten Text Recognition). In this thesis we aimed to improve the main stages of the recognition pipeline, from the scanned documents as input to the final transcription. We focused our effort on applying Neural Networks and deep learning techniques directly to the document images to extract suitable features to be used by the different tasks dealt with in this work: Image Cleaning and Enhancement (Document Image Binarization), Layout Extraction, Text Line Extraction, Text Line Normalization and, finally, decoding (or text line recognition). As one can see, the work focuses on small improvements across the several Document Image Analysis stages, but it also deals with some real challenges: historical manuscripts and documents without clear layouts or with heavy degradation. Neural Networks are a central topic of the whole work collected in this document. Different convolutional models have been applied for document image cleaning and enhancement. Connectionist models have also been used for text line extraction: first, for detecting interest points, combining them into text segments and, finally, extracting the lines by means of aggregation techniques; and second, for pixel labeling to extract the main body area of the text and then the limits of the lines. For text line preprocessing, i.e., to normalize the text lines before recognizing them, similar models have been used to detect the main body area and then to height-normalize the images, giving more importance to the central area of the text. Finally, Convolutional Neural Networks and deep multilayer perceptrons have been combined with hidden Markov models to improve our transcription engine significantly. The suitability of all these approaches has been tested with different corpora for each of the stages dealt with, giving competitive results for most of the methodologies presented. / Pastor Pellicer, J. (2017). Neural Networks for Document Image and Text Processing [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/90443
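To make the pixel-labelling idea more concrete, here is a minimal sketch of a small fully-convolutional network of the kind the abstract describes for document image cleaning and binarisation; the layer sizes, the use of PyTorch and the random training data are assumptions of the example, not details taken from the thesis.

```python
# Minimal sketch: a tiny fully-convolutional net that maps a grey-scale patch
# to one "ink vs. background" logit per pixel, trained on dummy data.
import torch
import torch.nn as nn

class PixelLabeller(nn.Module):
    """Grey-scale image in, per-pixel logit out."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=1),   # one logit per pixel
        )

    def forward(self, x):
        return self.net(x)

model = PixelLabeller()
loss_fn = nn.BCEWithLogitsLoss()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)

# One dummy training step on random tensors standing in for (degraded, clean) pairs.
degraded = torch.rand(4, 1, 64, 64)               # batch of degraded patches
clean = (torch.rand(4, 1, 64, 64) > 0.5).float()  # binary ground-truth masks
optimiser.zero_grad()
loss = loss_fn(model(degraded), clean)
loss.backward()
optimiser.step()
print(f"dummy training loss: {loss.item():.4f}")
```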
573

Entity extraction, animal disease-related event recognition and classification from web

Volkova, Svitlana January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / William H. Hsu / Global epidemic surveillance is an essential task for national biosecurity management and bioterrorism prevention. The main goal is to protect the public from major health threats. To perform this task effectively one requires reliable, timely and accurate medical information from a wide range of sources. Towards this goal, we present a framework for epidemiological analytics that can be used to extract and visualize infectious disease outbreaks automatically from a variety of unstructured web sources. More precisely, in this thesis we consider several research tasks including document relevance classification, entity extraction and animal disease-related event recognition in the veterinary epidemiology domain. First, we crawl web sources and classify collected documents by topical relevance using supervised learning algorithms. Next, we propose a novel approach for automated ontology construction in the veterinary medicine domain. Our approach is based on semantic relationship discovery using syntactic patterns. We then apply our automatically constructed ontology to the domain-specific entity extraction task. Moreover, we compare our ontology-based entity extraction results with an alternative sequence labeling approach. We introduce a sequence labeling method for entity tagging that relies on syntactic feature extraction using a sliding window. Finally, we present our novel sentence-based event recognition approach that includes three main steps: entity extraction of animal diseases, species, locations, dates and confirmation-status n-grams; event-related sentence classification into two categories (suspected or confirmed); and automated event tuple generation and aggregation. We show that our document relevance classification results, as well as our entity extraction and disease-related event recognition results, are significantly better than the results reported by other animal disease surveillance systems.
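As a small illustration of the sliding-window feature extraction mentioned for entity tagging, the sketch below builds a feature dictionary for each token from its neighbouring words; the window size, the feature names and the example sentence are invented for the illustration and are not taken from the thesis.

```python
# Minimal sketch: sliding-window features for sequence labelling.
def window_features(tokens, index, size=2):
    """Features for tokens[index] drawn from a window of +/- `size` tokens."""
    feats = {"bias": 1.0,
             "word.lower": tokens[index].lower(),
             "word.istitle": tokens[index].istitle()}
    for offset in range(-size, size + 1):
        if offset == 0:
            continue
        j = index + offset
        if 0 <= j < len(tokens):
            feats[f"{offset:+d}:word.lower"] = tokens[j].lower()
        else:
            feats[f"{offset:+d}:word.lower"] = "<PAD>"   # outside the sentence
    return feats

# Hypothetical outbreak-report sentence used only to show the output format.
sentence = ["Avian", "influenza", "confirmed", "in", "Kansas", "on", "Monday"]
for i, tok in enumerate(sentence):
    print(tok, window_features(sentence, i, size=1))
```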
574

Towards a comprehensive functional layered architecture for the Semantic Web

Gerber, Aurona J. 30 November 2006 (has links)
The Semantic Web, as the foreseen successor of the current Web, is envisioned to be a semantically enriched information space usable by machines or agents that perform sophisticated tasks on behalf of their users. The realisation of the Semantic Web prescribes the development of a comprehensive and functional layered architecture for the increasingly semantically expressive languages that it comprises. A functional architecture is a model specified at an appropriate level of abstraction, identifying system components based on required system functionality, whilst a comprehensive architecture is an architecture founded on established design principles within Software Engineering. Within this study, an argument is formulated for the development of a comprehensive and functional layered architecture through the development of a Semantic Web status model, the extraction of the function of established Semantic Web technologies, and the development of an evaluation mechanism for layered architectures compiled from design principles and fundamental features of layered architectures. In addition, an initial version of such a comprehensive and functional layered architecture for the Semantic Web is constructed based on the building blocks described above, and this architecture is applied to several scenarios to establish its usefulness. In conclusion, based on the evidence collected as a result of the research in this study, it is possible to justify the development of an architectural model, or more specifically, a comprehensive and functional layered architecture for the languages of the Semantic Web. / Computing / PHD (Computer Science)
575

The use of electronic evidence in forensic investigation

Ngomane, Amanda Refiloe 06 1900 (has links)
For millions of people worldwide the use of computers has become a central part of life. Criminals are exploiting these technological advances for illegal activities. This growth of technology has therefore produced a completely new source of evidence referred to as 'electronic evidence'. In light of this, the researcher focused on the collection of electronic evidence and its admissibility at trial. The study intends to assist and give guidance to investigators on collecting electronic evidence properly and legally, and on ensuring that it is admitted as evidence in court. Electronic evidence is fragile and volatile by nature and therefore requires the investigator always to exercise reasonable care during its collection, preservation and analysis to protect its identity and integrity. The legal requirements that the collected electronic evidence must satisfy for it to be admissible in court are relevance, reliability, and authenticity. When presenting the evidence in court, the investigator should always keep in mind that judges are not specialists in the computing environment, and must therefore be able to explain how the chain of custody was maintained during the collection, preservation and analysis of electronic evidence. The complex technology behind electronic evidence must be clearly explained so that the court is able to understand the evidence as an ordinary person, even one who has never used a computer before, would. This is because the court always relies on the expertise of the investigator to understand electronic evidence and make a ruling on matters related to it. / Police Practice / M. Tech. (Forensic Investigation)
576

Expression numérique par l'intermédiaire de gabarits CAO des documents de contrôle d'interface en aéronautique

Jouffroy, Denis January 2011 (has links)
Interface management is a key element in the design of complex systems within a collaborative engineering framework. An interface control document (ICD) is the main instrument available for preventing assembly errors and problems of interpretation and coordination between the various subsystems. In a collaborative engineering environment, the question that therefore arises is: how can interface management be facilitated and interface control documents be improved within a collaborative engineering framework? This research project proposes a digital expression of the geometric and functional constraints contained in interface control documents in a collaborative engineering context. First, links must be built between the interface control documents and the geometric models. A unified digital format for interface control documents (ICDs) is then defined. The digital representation of the constraints is achieved by creating a typology of CAD templates that is integrated into the unified digital format. A methodology is then required to exploit this digital representation and to establish mechanisms for managing and tracking the verification of the constraints derived from the interface control document. All of these concepts are tested on a concrete case: the assembly of a Pratt & Whitney engine onto an engine pylon for a Bombardier regional aircraft. Interface management and interface control documents are important elements for ensuring effective collaborative engineering in a product lifecycle management context. A unified digital format, built on a typology of templates, is proposed for systematically documenting the specifications contained in an ICD. With this master's project, a digital representation of interface control documents becomes possible. Users of this representation will have a set of tools and methods that allow them to generate the templates and/or volumes that support decision-making on whether the constraints between two products are respected. Using such an ICD saves time: the engineer finds all the information in a single document and, if other documents are needed, their location is specified in the ICD, in some cases even through a dynamic link. The content of interface control documents is clarified, while enabling faster decisions on constraint compliance. Finally, communication between partners is facilitated by the ICD, since the document contains all the information they need; a single exchange is thus sufficient to handle an assembly made up of two products.
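Purely as an illustration of what a unified digital format for an interface control document might look like, the sketch below encodes an ICD with one geometric constraint and checks a measured value against it; the field names, template type and numeric values are hypothetical and do not come from the thesis.

```python
# Minimal sketch: an ICD as a small data structure plus a compliance check.
from dataclasses import dataclass

@dataclass
class InterfaceConstraint:
    name: str
    template_type: str        # e.g. a clearance-envelope or bolt-pattern template
    min_value_mm: float
    max_value_mm: float

@dataclass
class InterfaceControlDocument:
    interface_id: str
    partners: tuple
    constraints: list

    def check(self, measured: dict) -> dict:
        """Return constraint name -> True/False for the measured values."""
        return {c.name: c.min_value_mm <= measured.get(c.name, float("nan")) <= c.max_value_mm
                for c in self.constraints}

# Invented example values for an engine-to-pylon interface.
icd = InterfaceControlDocument(
    interface_id="ENGINE-PYLON-01",
    partners=("engine supplier", "airframer"),
    constraints=[InterfaceConstraint("mount_clearance", "clearance_envelope", 5.0, 12.0)],
)
print(icd.check({"mount_clearance": 8.3}))   # {'mount_clearance': True}
```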
577

"Det går inte att lita på föräldrarna" : Hur skildras omsorgsbristande föräldrar i LVU-rättsfall? / "You can't trust the parents" : How are parents, that neglect their children, represented in LVU-legal cases?

Johansson, Jeanette, Karlsson, Frida January 2010 (has links)
The aim of this study was to examine, with a social constructivist approach, how parents regarded as neglecting their children are depicted in 12 LVU legal cases from the Supreme Administrative Court in Sweden. Using a document analysis influenced by discourse analytical tools, we found repeated descriptions of the parents that construct an image of them as falling short in the care of their children. Lack of emotion, mental disorder, substance abuse and physical maltreatment were the main categories of child neglect in terms of which the parents were described. Attitudes towards authority, aggression and immaturity, lack of insight, and deficiencies in the home were recurrent in the descriptions of the parents. We further found that mothers were regarded as more responsible for the children than fathers. According to Goffman, describing certain groups of individuals with discrediting words has a stigmatizing effect: it helps to reinforce what is considered normal or abnormal. Furthermore, we found that the language of the LVU legal cases possesses a power in the construction of parents who neglect their children, which was analyzed on the basis of Foucault's theory of power.
578

Under utgivning : den vetenskapliga utgivningens bibliografiska funktion

Dahlström, Mats January 2006 (has links)
The thesis investigates in what way the scholarly edition performs bibliographic functions as it manages and positions other documents. This is where the study differs from previous research on scholarly editing and bibliography. It aims to trace the boundary between scholarly editing and bibliography by comparing crucial objectives, problems and conflicts in each field. This is accomplished by identifying the argumentation, assumptions and conceptual frameworks that form the rationale for the fields, and subjecting them to qualitative critical and historical analysis. The main empirical material is editorial theory literature, with scholarly editions serving as illustrative examples. Key questions concern the way scholarly editors and bibliographers identify, define and reproduce their respective source material; the reasons for conflicts between editors' varying expectations of the reproductive force in printed and digital editions; and the connections and demarcations between scholarly editing and bibliography and between scholarly editions and reference works such as bibliographies. Bibliographic and media theory form the basis for the theoretical framework, with additional input from book history, literary theory, genre studies and scholarly communication studies. The thesis suggests a distinction between the two activities of clustering and transposition, and the distortion the latter brings about. These concepts are employed to detect, group and explain activities and problems in scholarly editing and bibliography, which both manage sets of documents by clustering them to one another and transposing their contents by producing new documents. There is a noticeable division of labour between the two tasks, and they also correspond to different types of editions. The study also ties the dominant editorial strategies and edition types to respective bibliographic foci, and argues that central conflict areas are primarily accentuated and only secondarily introduced with digital editing. An idealistic strand treats editing as unbiased delivery of disambiguable and reproducible content, while to a hermeneutical strand the edition is an argumentative and content-constraining filter, its editor being a kind of biased author. In a third strand, editions are content-circulating ecosystems with a division of labour between collaborating media types. In particular, the view of editions as constitutive arguments is related to analogous observations in LIS and genre and scholarly communication studies. On the one hand, editing is supposed to be a dynamic research area, ready to respond to new findings and scholarly ideals. On the other, several arenas demand the edition to serve as a conservative force, static and confirmatory. The potential of digital media points to a distinction between edition and archive, where the former but not the latter explicitly takes an interpretative stand. Digital editing also boosts the idealistic strand by the seeming promise to separate facts from interpretation and to enhance maximum exhaustiveness and reproductivity. Although the thesis identifies many commonalities between editions and reference works and the way these are structured, there is a crucial difference. The edition is simultaneously a work's reference and referent. Bibliographies and reference works cannot make that claim.
/ Doctoral dissertation which, by permission of the Faculty of Social Sciences at the University of Gothenburg, for the award of the degree of Doctor of Philosophy, is presented for public examination at 13.15 on Saturday 9 December 2006 in Stora Hörsalen (C 203), Högskolan i Borås, Allégatan 1, Borås.
579

(Re)creations of scholarly journals : document and information architecture in open access journals

Francke, Helena January 2008 (has links)
This dissertation contributes to the research-based understanding of the scholarly journal as an artefact by studying the document structures of open access e-journals published by editors or small, independent publishers. The study focuses on the properties of the documents, taking its point of departure in a sociotechnical document perspective. This perspective is operationalised through a number of aspects from document architecture and information architecture: logical structures, layout structures, content structures, file structures, organisation systems, navigation, and labelling. The data collection took the form of a survey of 265 journal web sites, randomly selected, and qualitative readings of four journal web sites. The results of the study are presented based on choice of format and modes of representation; visual design; markup; metadata and paratexts; and document organisation and navigation. Two approaches were used to analyse the study findings. To begin with, the remediation strategies of the scholarly journals were discussed; how does this document type, which has a long tradition in the print medium, take possession of the web medium? The ties to the print journal are still strong, and a majority of the journals treat the web medium mainly as a way to distribute journal articles to be printed and read as hard copies. Many journals do, however, take advantage of such features as hypertext and full-text searching, and some use the flexibility of the web medium to provide their users with alternative views. A small number of e-journals also refashion the print journal by including modes of representation not possible in print, such as audio or video, to illustrate and support the arguments made in their articles. Furthermore, interactive features are used to increase communication between different groups, but this type of communicative situation has not yet become an integral part of the scholarly journal. An electronic document is often viewed as more flexible, but also less constant, than documents on paper. This sometimes means that the e-only journal is seen as a less dependable source for scholarly publishing than print. A second analytical approach showed how the architectures are used to indicate aspects that can enhance a journal's chances of being regarded as a credible source: a cognitive authority. Four strategies have been identified as used by the journals: they employ architectural features to draw on the cognitive authority of people or organisations associated with the journal, on the cognitive authority of other documents, and on the professional use of the conventions of print journals and web sites respectively. By considering how document properties are used to indicate cognitive authority potential, a better understanding of how texts function as cognitive authorities is achieved. / Doctoral dissertation which, by permission of the Faculty of Social Sciences at the University of Gothenburg, for the award of the doctoral degree, is presented for public examination at 13.15 on Monday 28 April in the Sappören lecture hall, University of Gothenburg, Sprängkullsgatan 25.
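As a small, hedged illustration of the kind of embedded metadata discussed under markup and metadata above, the sketch below reads name/content pairs from the meta elements of a journal article page; the sample markup and field names are invented and unrelated to the study's own material.

```python
# Minimal sketch: collect <meta name="..." content="..."> pairs from HTML.
from html.parser import HTMLParser

class MetaCollector(HTMLParser):
    """Gather name/content pairs from an HTML document's meta elements."""
    def __init__(self):
        super().__init__()
        self.metadata = {}

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            attrs = dict(attrs)
            if "name" in attrs and "content" in attrs:
                self.metadata[attrs["name"]] = attrs["content"]

# Invented sample page, loosely modelled on Dublin Core style metadata.
sample_page = """
<html><head>
  <meta name="DC.title" content="(Re)creations of scholarly journals">
  <meta name="DC.creator" content="Francke, Helena">
  <meta name="citation_publication_date" content="2008">
</head><body>...</body></html>
"""
collector = MetaCollector()
collector.feed(sample_page)
print(collector.metadata)
```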
580

Developing standards for household latrines in Rwanda

Medland, Louise S. January 2014 (has links)
The issue of standards for household latrines is complex because discussions related to standards for latrines in literature from the water, sanitation and hygiene (WASH) sector tend to focus on the negative aspects of standards and highlight cases where the misapplication of standards in the past has caused problems. However, despite concerns about the constraints that standards can seemingly impose, there is an acknowledgement that standards can play a more positive role in supporting efforts to increase access to household latrines. The World Health Organisation has long-established and widely recognised standards for water supply quality and quantity, but there are no equivalent standards for sanitation services and there is currently no guidance that deals with the topic of standards for household latrines. Household latrines are a small component of the wider sanitation system in a country, and by considering how standards for household latrines operate within this wider sanitation system the aim of this research is to understand what influence standards can have on household latrines and explore how the negative perceptions about standards and latrine building can be overcome. The development of guidance on how to develop well-written standards is the core focus of this research. This research explores the factors that can influence the development and use of a standard for household latrines in Rwanda using three data collection methods. Document analysis of 66 documents, including policies and strategies, design manuals and training guides from 17 countries throughout Sub-Saharan Africa, was used in conjunction with the Delphi Method involving an expert panel of 27 from Rwanda and 38 semi-structured interviews. The research concludes that perceptions about standards for household latrines are fragmented and confused, with little consensus in Rwanda on what need a standard should meet and what role it should play. The study has found that the need for a standard must be considered in the context of the wider sanitation system; otherwise it can lead to duplication of efforts and increased confusion for all stakeholders. The study also found that there is an assumed link between standards and the enforcement of standards through regulation and punishments, which creates the negative perceptions about standards in Rwanda. However, despite this aversion to standards, there are still intentions to promote the standardisation of latrine technologies and designs, led by national government in Rwanda and in other Sub-Saharan African countries. The contribution to knowledge of this research includes a decision process, presented at the end of the study, which can be used by decision makers who are interested in developing a standard for household latrines. The decision process acts as a tool for outlining how a standard can operate within the national sanitation system. This understanding provides decision makers with the basis for continuing the debate on what a well-written standard looks like in the national context and supports the development of a standard that is fit for purpose and provides a positive contribution to the sector.
