11

High Throughput Automated Comparative Analysis of RNAs Using Isotope Labeling and LC-MS/MS

Li, Siwei 17 October 2014 (has links)
No description available.
12

Automated segmentation and analysis of layers and structures of human posterior eye

Zhang, Li 01 December 2015 (has links)
Optical coherence tomography (OCT) is becoming an increasingly important modality for the diagnosis and management of a variety of eye diseases, such as age-related macular degeneration (AMD), glaucoma, and diabetic macular edema (DME). Spectral domain OCT (SD-OCT), an advanced type of OCT, produces three-dimensional, high-resolution cross-sectional images and depicts the delicate structure of the functional portion of the posterior eye, including the retina, choroid, and optic nerve head. As the clinical importance of OCT for retinal disease management and the need for quantitative and objective disease biomarkers grow, fully automated three-dimensional analysis of the retina has become desirable. Previously, our group developed the Iowa Reference Algorithms (http://www.biomed-imaging.uiowa.edu/downloads/), a set of fully automated 3D segmentation algorithms for the analysis of retinal layer structures in subjects without retinal disease; this was the first method for segmenting and quantifying individual layers of the retina in three dimensions. However, in retinal disease the normal architecture of the retina - specifically the outer retina - is disrupted: fluid and deposits can accumulate, and normal tissue can be replaced by scar tissue. These abnormalities increase the irregularity of the retinal structure and make quantitative analysis of the image data especially challenging. In this work, we focus on the segmentation of the retina in patients with age-related macular degeneration, the most important cause of blindness and visual loss in the developed world. Though early and intermediate AMD result in some vision loss, the most devastating vision loss occurs in the two end stages of the disease, geographic atrophy (GA) and choroidal neovascularization (CNV). In GA, because of pathological changes that are not fully understood, the retinal pigment epithelium disappears, and photoreceptors lose this supporting tissue and degenerate. In CNV, the growth of abnormal blood vessels originating from the choroidal vasculature causes fluid to enter the surrounding retina, disrupting the tissue and eventually causing visual loss. The severity and progression of early AMD are characterized by the formation of drusen and subretinal drusenoid deposits, structures containing photoreceptor metabolites, primarily lipofuscin; the more drusen, the more severe the disease and the higher the risk of progressing to GA or CNV. Thus, to improve the image-guided management of AMD, we study automated methods for segmenting and quantifying these intraretinal, subretinal, and choroidal structures, including different types of abnormalities and layers, focusing on the outer retina. The major contributions of this thesis are: 1) an automated method for segmenting the choroid and quantifying choroidal thickness in 3D OCT images; 2) an automated method for quantifying the presence of drusen in early and intermediate AMD; 3) a method for identifying the different ocular structures simultaneously; and 4) a study of the relationships among intraretinal, subretinal, and choroidal structures.
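
The layer segmentation described in this abstract can be illustrated, in a heavily simplified two-dimensional form, as finding a minimum-cost path for a boundary across a B-scan while limiting how far the boundary may jump between neighboring columns. The sketch below only conveys that idea and is not the Iowa Reference Algorithms; the gradient-based cost, the `segment_boundary` name, and the `max_jump` parameter are assumptions for illustration.

```python
# Minimal sketch: trace one retinal layer boundary in a single 2D OCT B-scan
# with dynamic programming. The cost rewards dark-to-bright vertical gradients,
# and a smoothness constraint bounds the jump between neighboring A-scans.
import numpy as np

def segment_boundary(bscan: np.ndarray, max_jump: int = 2) -> np.ndarray:
    """Return one row index per column, tracing a smooth minimum-cost boundary."""
    grad = np.diff(bscan.astype(float), axis=0)   # vertical gradient, (rows-1, cols)
    cost = -grad                                  # bright-below-dark edges become cheap
    rows, cols = cost.shape

    acc = np.full((rows, cols), np.inf)           # accumulated path cost
    back = np.zeros((rows, cols), dtype=int)      # backpointers for path recovery
    acc[:, 0] = cost[:, 0]
    for c in range(1, cols):
        for r in range(rows):
            lo, hi = max(0, r - max_jump), min(rows, r + max_jump + 1)
            prev = acc[lo:hi, c - 1]
            k = int(np.argmin(prev))
            acc[r, c] = cost[r, c] + prev[k]
            back[r, c] = lo + k

    # Trace the optimal path back from the cheapest end point.
    boundary = np.zeros(cols, dtype=int)
    boundary[-1] = int(np.argmin(acc[:, -1]))
    for c in range(cols - 1, 0, -1):
        boundary[c - 1] = back[boundary[c], c]
    return boundary

# Synthetic B-scan with a bright band starting at row 40:
scan = np.zeros((100, 60)); scan[40:55, :] = 1.0
print(segment_boundary(scan)[:5])   # -> [39 39 39 39 39], the dark-to-bright transition
```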
13

Runtime Analysis of Malware

Iqbal, Muhammad Shahid, Sohail, Muhammad January 2011 (has links)
Context: Every day an increasing number of malware samples spread around the world, infecting not only end users but also large organizations. This poses a massive security threat to private data and expensive computing resources. A great deal of research is under way to cope with this volume of malicious software, and researchers and practitioners have developed many new methods to deal with it. One of the most effective methods for capturing malicious software is dynamic malware analysis. The dynamic analysis methods used today are very time consuming and resource hungry; it can normally take days, or at least several hours, to analyze a single instance of suspected software. This is not good enough given the number of attacks occurring every day. Objective: To save the time and expensive resources spent on these analyses, AMA, an automated malware analysis system, was developed to analyze large numbers of suspected software samples. Analysis of any software inside AMA results in a detailed report of its behavior, including the changes made to the file system, registry and processes, and the network traffic consumed. The main focus of this study is to develop a model that automates the runtime analysis of software and provides a detailed analysis report, and to evaluate its effectiveness. Methods: A thorough background study was conducted to gain knowledge about malicious software and its behavior. Software analysis techniques were then studied to devise a model that automates the runtime analysis of software. A prototype system was developed and a quasi-experiment performed on malicious and benign software to evaluate the accuracy of the new system, and the generated reports were compared with those of Norman and Anubis. Results: Based on the background study, an automated runtime analysis model was developed and a quasi-experiment performed, using the implemented prototype system, on selected malicious and benign software. The experimental results show that AMA captured more detailed software behavior than Norman and Anubis and could therefore be used to classify software more accurately. Conclusions: We concluded that AMA captures more detailed behavior of the analyzed software and gives a more accurate classification of it. We also see from the experimental results that there are no concrete factors distinguishing the general behavior of the two types of software; however, by digging deeper into the analysis report one can understand the intentions of the software. This means that the reports generated by AMA provide enough information about software behavior to draw correct conclusions.
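
One ingredient of such a behavior report, the changes made to the file system, can be sketched as a before/after snapshot comparison around the execution of a sample. The code below is a conceptual sketch of that idea only, not AMA's actual implementation; `snapshot`, `diff_snapshots`, and the example path are invented names.

```python
# Conceptual sketch: diff file-system snapshots taken before and after a sample
# runs inside an isolated analysis environment, and classify the changes.
import hashlib
from pathlib import Path

def snapshot(root: str) -> dict[str, str]:
    """Map every file under `root` to the SHA-256 hash of its contents."""
    state = {}
    for p in Path(root).rglob("*"):
        if p.is_file():
            state[str(p)] = hashlib.sha256(p.read_bytes()).hexdigest()
    return state

def diff_snapshots(before: dict[str, str], after: dict[str, str]) -> dict[str, list[str]]:
    """Classify file-system changes observed between the two snapshots."""
    return {
        "created":  sorted(set(after) - set(before)),
        "deleted":  sorted(set(before) - set(after)),
        "modified": sorted(p for p in set(before) & set(after) if before[p] != after[p]),
    }

# Usage inside the sandbox (paths are illustrative):
# before = snapshot("/analysis/target")
# ... execute the suspected sample ...
# after = snapshot("/analysis/target")
# print(diff_snapshots(before, after))
```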
14

Specification and Automated Design-Time Analysis of the Business Process Human Resource Perspective

Resinas, Manuel, del-Río-Ortega, Adela, Ruiz-Cortés, Antonio, Cabanillas Macias, Cristina 03 April 2015 (has links) (PDF)
The human resource perspective of a business process is concerned with the relation between the activities of a process and the actors who take part in them. Unlike other process perspectives, such as control flow, for which many different types of analyses have been proposed, such as finding deadlocks, there is an important gap regarding the human resource perspective. Resource analysis in business processes has not been defined, and only a few analysis operations can be glimpsed in previous approaches. In this paper, we identify and formally define seven design-time analysis operations related to how resources are involved in process activities. Furthermore, we demonstrate that for a wide variety of resource-aware BP models, those analysis operations can be automated by leveraging Description Logic (DL) off-the-shelf reasoners. To this end, we rely on Resource Assignment Language (RAL), a domain-specific language that enables the definition of conditions to select the candidates to participate in a process activity. We provide a complete formal semantics for RAL based on DLs and extend it to address the operations, for which the control flow of the process must also be taken into consideration. A proof-of-concept implementation has been developed and integrated in a system called CRISTAL. As a result, we can give an automatic answer to different questions related to the management of resources in business processes at design time.
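
To give a flavor of one such design-time operation, the sketch below answers "who can potentially perform this activity?" for a toy organizational model, using plain Python sets instead of RAL expressions and a DL reasoner as in the paper. The roles, the separation-of-duties condition, and all names are invented for illustration.

```python
# Simplified illustration of a resource-perspective analysis operation:
# computing the potential performers of an activity from role assignments.
ROLES = {
    "alice": {"account manager"},
    "bob":   {"clerk"},
    "carol": {"account manager", "auditor"},
}

def has_role(person: str, role: str) -> bool:
    return role in ROLES.get(person, set())

# A resource-assignment condition for an "approve order" activity, in the spirit
# of a RAL expression, written here as a plain predicate over persons.
def approve_order_condition(person: str, creator: str) -> bool:
    # Separation of duties: an account manager who did not create the order.
    return has_role(person, "account manager") and person != creator

def potential_performers(condition, people, **ctx):
    return {p for p in people if condition(p, **ctx)}

print(potential_performers(approve_order_condition, ROLES, creator="alice"))
# -> {'carol'}  (alice is excluded by the separation-of-duties constraint)
```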
15

Automated Identification of Noun Clauses in Clinical Language Samples

Manning, Britney Richey 09 August 2009 (has links) (PDF)
The identification of complex grammatical structures including noun clauses is of clinical importance because differences in the use of these structures have been found between individuals with and without language impairment. In recent years, computer software has been used to assist in analyzing clinical language samples. However, this software has been unable to accurately identify complex syntactic structures such as noun clauses. The present study investigated the accuracy of new software, called Cx, in identifying finite wh- and that-noun clauses. Two sets of language samples were used. One set included 10 children with language impairment, 10 age-matched peers, and 10 language-matched peers. The second set included 40 adults with mental retardation. Levels of agreement between computerized and manual analysis were similar for both sets of language samples; Kappa levels were high for wh-noun clauses and very low for that-noun clauses.
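
The agreement statistic reported here, Cohen's kappa between automated (Cx) and manual coding, can be computed from two parallel label sequences as sketched below; the example labels are invented and do not come from the study's language samples.

```python
# Sketch: Cohen's kappa for chance-corrected agreement between two coders.
from collections import Counter

def cohens_kappa(rater_a: list[str], rater_b: list[str]) -> float:
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

manual    = ["noun-clause", "none", "noun-clause", "none", "none", "none"]
automatic = ["noun-clause", "none", "none",        "none", "none", "none"]
print(round(cohens_kappa(manual, automatic), 2))   # -> 0.57
```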
16

Towards Automating Structural Analysis of Complex RNA Molecules and Some Applications In Nanotechnology

Parlea, Lorena Georgeta 02 June 2015 (has links)
No description available.
17

Integritet av IT-forensiska verktyg för automatisk analys / Integrity of IT-forensic tools regarding automated analysis

Canovas Thorsell, Roberto January 2021 (has links)
IT-related crime is increasing rapidly, and the Swedish Police Authority faces new challenges in identifying perpetrators. More and more software and services are becoming automated, and this also applies to the software the Police Authority uses. One of the challenges is the enormous amount of data that must be processed and analyzed during investigations, which presupposes that the tools present data with preserved integrity. The tools used are almost always third-party software, and IT forensics must rely on the organizations that make them, so it is important that the correct data is extracted and that the data is accurate. This study aims to compare two such tools in how they identify and present data. The study was conducted in collaboration with the Police Authority at the Regional IT Crime Center West - Skövde and hopes to bring new insights into and knowledge of the tools on which the comparison is based, and with that knowledge to assess the integrity of the tools. The result of the study is that both tools present data with preserved integrity.
18

Analysis Of Extended Feature Models With Constraint Programming

Karatas, Ahmet Serkan 01 June 2010 (has links) (PDF)
In this dissertation we lay the groundwork for the automated analysis of extended feature models with constraint programming. Among different proposals, feature modeling has proven to be very effective for modeling and managing variability in Software Product Lines. However, industrial experience has shown that feature models often grow too large, with hundreds of features and complex cross-tree relationships, which necessitates automated analysis support. To address this issue we present a mapping from extended feature models, which may include complex feature-feature, feature-attribute and attribute-attribute cross-tree relationships as well as global constraints, to constraint logic programming over finite domains. Then, we discuss the effects of including complex feature-attribute relationships on the various analysis operations defined on feature models. As new types of variability emerge from the inclusion of feature attributes in cross-tree relationships, we discuss the need to reformulate some of the analysis operations and suggest a revised understanding of others. We also propose new analysis operations arising from the nature of the newly introduced variability. We then propose a transformation from extended feature models to basic/cardinality-based feature models that may be applied under certain circumstances and that enables the use of SAT or BDD solvers in the automated analysis of extended feature models. Finally, we discuss the role of context information in feature modeling and propose using context information in the staged configuration of feature models.
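
As a toy illustration of what "extended" adds, the sketch below attaches a memory attribute to features, includes one feature-feature and one feature-attribute cross-tree constraint, and counts valid products by brute-force enumeration rather than the CLP(FD) encoding used in the dissertation. All features, attribute values, and constraints are invented.

```python
# Toy extended feature model: boolean feature selections plus an attribute
# (memory cost) that participates in a cross-tree constraint.
from itertools import product

FEATURES = ["gui", "network", "encryption"]           # the root feature is always selected
MEMORY   = {"gui": 64, "network": 16, "encryption": 32}

def is_valid(sel: dict[str, bool]) -> bool:
    # Feature-feature cross-tree constraint: encryption requires network.
    if sel["encryption"] and not sel["network"]:
        return False
    # Feature-attribute constraint: total memory of selected features <= 96.
    return sum(MEMORY[f] for f in FEATURES if sel[f]) <= 96

configs = [dict(zip(FEATURES, bits)) for bits in product([False, True], repeat=3)]
valid = [c for c in configs if is_valid(c)]
print(len(valid), "valid products out of", len(configs))   # -> 5 valid products out of 8
for c in valid:
    print({f for f, on in c.items() if on})
```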
19

Biomarker Discovery in Cutaneous Malignant Melanoma : A Study Based on Tissue Microarrays and Immunohistochemistry

Agnarsdóttir, Margrét January 2011 (has links)
The incidence of cutaneous malignant melanoma has increased dramatically in Caucasians over the last few decades, an increase that is partly explained by altered sun-exposure habits. For the individual patient with localized disease, the tumor thickness of the excised lesion is the most important prognostic factor. However, there is a need to identify characteristics that can place patients in certain risk groups. In this study, the expression of multiple proteins in malignant melanoma tumors was studied with the aim of identifying potential new candidate biomarkers. Representative samples from melanoma tissues were assembled in a tissue microarray format and protein expression was detected using immunohistochemistry. Multiple cohorts were used, and for a subset of proteins the expression was also analyzed in melanocytes in normal skin and in benign nevi. The immunohistochemical staining was evaluated manually and, for some of the proteins, also with an automated algorithm. The protein expression of STX7 was described for the first time in tumors of the melanocytic lineage. Stronger expression of STX7 and SOX10 was seen in superficial spreading melanomas compared with nodular malignant melanomas. Inverse relationships were observed between STX7 expression and T-stage, and between SOX10 expression and both T-stage and Ki-67. In a population-based cohort the expression of MITF was analyzed and found to be associated with prognosis. Twenty-one potential biomarkers were analyzed using bioinformatics tools, and a protein signature was identified with a prognostic value independent of T-stage. The protein driving this signature was RBM3, a protein not previously described in malignant melanoma. Other markers included in the signature were MITF, SOX10 and Ki-67. In conclusion, the protein expression of numerous potential biomarkers was extensively studied, and a new prognostic protein panel was identified that can be of value for risk stratification.
20

Automated Identification of Adverbial Clauses in Child Language Samples

Clark, Jessica Celeste 10 March 2009 (has links) (PDF)
In recent years, computer software has been used to assist in the analysis of clinical language samples. However, this software has been unable to accurately identify complex syntactic structures such as adverbial clauses. Complex structures, including the adverbial clause, are of interest in child language due to differences in the development of this structure between children with and without language impairment. The present study investigated the accuracy of new software, called Cx, in identifying adverbial clauses. Two separate collections of language samples were used. One collection included 10 children with language impairment, 10 age-matched peers, and 10 language-matched peers. A second collection contained language from 174 students in first grade, third grade, fifth grade, and junior college. There was high total agreement between computerized and manual analysis with an overall Kappa level of .895.
