21 |
3-D Model Characterization and Identification from Intrinsic Landmarks. Camp, John L. 07 December 2011
No description available.
|
22 |
High-Throughput De Novo Sequencing of Transfer RNAs Using Liquid Chromatography-Tandem Mass Spectrometry. Shi, Wunan 18 October 2013
No description available.
|
23 |
Automated vulnerability scan in a CentOS virtual environment: A solution to a more hardened C2-system. Johansson, Frida January 2022
A command and control system serves as machine support for the manoeuvring and control of troops. Information is distributed through that system, and since it contains classified and critical data, that data must be reliable. A fundamental part of developing and maintaining such systems is to perform vulnerability scans and to verify that configuration requirements are met so that secure data connections are maintained. As technology evolves, new vulnerabilities that can be exploited to break into a system are constantly discovered, which places new demands on the system's security. Automated tests allow the scanning to run continuously, which maintains the security of the system. Two tools that simplify this scanning process are Nessus and Tenable.sc, where Nessus performs the actual scans and Tenable.sc provides an effective interface for analysing the results. This report examines the scanning procedure for the command and control system ASTERIX Intag Försvarsmakten (AIF), a system of four servers in which radar data is retrieved from the Civil Aviation Administration's (Luftfartsverket) network and forwarded to the Swedish Armed Forces' (Försvarsmakten) network. The system has firewalls with very strict rules, which makes it impossible to automate a scan without compromising the security that is already in place. By virtualizing AIF, another virtual machine can access the system, making an automated scan feasible. That virtual machine acts as a scanning host on which Nessus and Tenable.sc are installed and from which the scans are run. All communication with the tools is handled by a Python script, for example to start a scan or download a report, and access to the servers is granted via Secure Shell (SSH) when a scan is performed. Automating scanning in virtualized environments makes the testing process scalable. It enables automated mass testing, which can be used as a tool for vulnerability analysis and in turn contributes to and maintains a more secure IT infrastructure.
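Below is a minimal sketch of the kind of Python automation the abstract describes: driving a Nessus-style REST API to launch a scan and poll for completion. The host name, endpoint paths, header format and scan id are illustrative assumptions, not details taken from the thesis or verified against a particular Nessus or Tenable.sc version.

```python
import time
import requests

# Illustrative sketch only; base URL, endpoint paths and key names are assumptions.
BASE_URL = "https://scan-host:8834"                       # hypothetical scanning VM
HEADERS = {"X-ApiKeys": "accessKey=...; secretKey=..."}   # placeholder credentials

def launch_scan(scan_id: int) -> None:
    """Start a previously configured scan."""
    r = requests.post(f"{BASE_URL}/scans/{scan_id}/launch",
                      headers=HEADERS, verify=False)      # self-signed certificate assumed
    r.raise_for_status()

def wait_until_done(scan_id: int, poll_s: int = 30) -> None:
    """Poll the scan status until it reports completion."""
    while True:
        r = requests.get(f"{BASE_URL}/scans/{scan_id}", headers=HEADERS, verify=False)
        r.raise_for_status()
        if r.json()["info"]["status"] == "completed":
            return
        time.sleep(poll_s)

if __name__ == "__main__":
    launch_scan(42)        # hypothetical scan id
    wait_until_done(42)
    print("Scan finished; the report can now be exported and archived.")
```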
|
24 |
Technical note: reliability of Suchey-Brooks and Buckberry-Chamberlain methods on 3D visualizations from CT and laser scans. Villa, C., Buckberry, Jo, Cattaneo, C., Lynnerup, N. January 2013
Previous studies have reported that the ageing method of Suchey-Brooks (pubic bone) and some of the features applied by Lovejoy et al. and Buckberry-Chamberlain (auricular surface) can be confidently scored on 3D visualizations from CT scans. In this study, seven observers applied the Suchey-Brooks and Buckberry-Chamberlain methods to 3D visualizations based on CT scans and, for the first time, to 3D visualizations from laser scans. We examined how the bone features can be evaluated on 3D visualizations and whether the different modalities (direct observation of bones, 3D visualization from CT scans and from laser scans) yield similar assessments across observers. The best inter-observer agreement was found for the dry bones versus the 3D visualizations, with the highest values for the auricular surface. Between the two 3D modalities, less variability was obtained for the laser visualizations. Fair inter-observer agreement was obtained in the evaluation of the pubic bone in all modalities. In 3D visualizations of the auricular surfaces, transverse organization and apical changes could be evaluated, although with high inter-observer variability; micro- and macroporosity and surface texture were very difficult to score. In conclusion, these methods were developed for dry bones, where they perform best. The Suchey-Brooks method can be applied to 3D visualizations from CT or laser scans, but with less accuracy than on dry bone. The Buckberry-Chamberlain method should be modified before application to 3D visualizations. Future investigations should focus on a different approach and different features: 3D laser scans could be analysed with mathematical approaches, and sub-surface features should be explored on CT scans.
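As an aside, inter-observer agreement of this kind is commonly quantified with weighted kappa statistics. A minimal sketch, assuming ordinal stage scores and using scikit-learn (the scores below are invented placeholders, not the study's data):

```python
from itertools import combinations
from sklearn.metrics import cohen_kappa_score

# Placeholder ordinal stage scores from three observers on the same five specimens.
scores_by_observer = {
    "obs1": [3, 4, 2, 5, 3],
    "obs2": [3, 3, 2, 5, 4],
    "obs3": [4, 4, 2, 4, 3],
}

for (a, sa), (b, sb) in combinations(scores_by_observer.items(), 2):
    # Linear-weighted kappa penalises larger disagreements between ordinal stages more heavily.
    kappa = cohen_kappa_score(sa, sb, weights="linear")
    print(f"{a} vs {b}: weighted kappa = {kappa:.2f}")
```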
|
25 |
Automatic 3D facial modelling with deformable models. Xiang, Guofu January 2012
Facial modelling and animation has been an active research subject in computer graphics since the 1970s. Because of the extremely complex biomechanical structure of human faces and people's visual familiarity with them, modelling and animating realistic human faces remains one of the greatest challenges in computer graphics. Since we are so familiar with human faces and very sensitive to unnatural subtle changes in them, creating a convincing facial model and animation usually requires a tremendous amount of artistry and manual work. There is therefore a clear need for automatic facial-modelling techniques that reduce manual labour. To obtain a realistic facial model of an individual, it is now common to capture range scans of the individual with a 3D scanner and then fit a template to the scans. However, most existing template-fitting methods require manually selected landmarks to warp the template to the range scans, and selecting landmarks by hand over a large set of range scans is tedious. Another way to reduce repeated work is synthesis by reusing existing data; one example is expression cloning, which copies facial expressions from one face to another instead of creating them from scratch. The aim of this study is to develop a fully automatic framework for template-based facial modelling, facial expression transfer and facial expression tracking from range scans. In this thesis, the author developed an extension of the iterative closest point (ICP) algorithm that can match a template with range scans at different scales, and a deformable model that can be used to recover the shapes of range scans and to establish correspondences between facial models. With this registration method and the deformable model, the author proposed a fully automatic approach to reconstructing facial models and textures from range scans without requiring any manual intervention. To reuse existing data for facial modelling, the author formulated and solved the problem of facial expression transfer in the framework of discrete differential geometry. The author also applied these methods to face tracking on 4D range scans. The results demonstrate the robustness of the registration method and the capabilities of the deformable model. A number of possible directions for future work are pointed out.
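For readers unfamiliar with ICP, the following is a minimal rigid-alignment sketch of the classical closest-point/SVD iteration in Python; it is a generic illustration only and does not reproduce the scale-aware extension or the deformable model developed in the thesis.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_rigid(template: np.ndarray, scan: np.ndarray, iters: int = 20):
    """Align template points (N x 3) to scan points (M x 3) with rigid ICP."""
    tree = cKDTree(scan)
    src = template.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iters):
        # 1. Correspondences: nearest scan point for every template point.
        _, idx = tree.query(src)
        dst = scan[idx]
        # 2. Best rigid transform via SVD of the cross-covariance matrix.
        src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
        U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:          # avoid reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = dst.mean(0) - R @ src.mean(0)
        # 3. Apply the increment and accumulate the total transform.
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```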
|
26 |
Efficacy of Osteoporosis Diagnosis Using DXA Scans of the Distal Radius in a Group of Male Patients with Osteoporosis: a Retrospective Study. Holt, Nicole, Hamdy, Ronald C., Zheng, Shimin, Clark, W. Andrew, Alamian, Arsham, Morrell, Casey, Piggee, Tommy B., Magallanes, Christian 06 April 2016
Osteoporosis is a disease characterized by low bone mineral density (BMD), which compromises bone tissue, increasing fragility and susceptibility to fracture. It affects nearly 50% of women and 20% of men over the age of 50, and fractures resulting from osteoporosis cause significant morbidity and mortality. Patients with or at risk of osteoporosis should therefore be identified before rather than after a fracture occurs. The gold standard for diagnosing osteoporosis is dual X-ray absorptiometry (DXA). The purpose of this study was to evaluate the usefulness of assessing BMD at various parts of the distal radius (ultra-distal, mid-point, one third, and total) compared to the conventional sites (lumbar vertebrae and proximal femur) using DXA to diagnose osteoporosis. This was a retrospective study of 1,641 male patients over the age of 50 who had undergone bone densitometry (DXA scans) of at least one hip, the lumbar vertebrae and the distal radius. Ordinary regression and correlation analyses were used to assess the association between the lowest bone density score of the hip or lumbar vertebrae and the scans at the various radius sites. Comparing standardized scores from the radius with the lowest standardized scores from the hip or lumbar vertebrae, highly significant correlations were found: R = 0.59, p < 0.001 for the left ultra-distal radius; R = 0.59, p < 0.001 for the left mid-point radius; R = 0.54, p < 0.001 for the left one-third radius; and R = 0.60, p < 0.001 for the total left radius. The results indicate that the total left radius is the most accurate site for diagnosing osteoporosis in our study population. These findings can have far-reaching psycho-socio-economic implications, showing that DXA scans of the distal radius obtained with inexpensive, low-technology, portable scanners can be used to effectively diagnose osteoporosis. This is particularly relevant to the needs of the underserved rural populations of Central Appalachia.
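A minimal sketch of the kind of correlation analysis reported above, assuming standardized scores per patient are available as NumPy arrays (the data below are random placeholders, not the study cohort):

```python
import numpy as np
from scipy.stats import pearsonr

# Placeholder standardized scores: the lowest hip/lumbar score per patient and
# the corresponding total-left-radius score, loosely correlated for illustration.
rng = np.random.default_rng(0)
lowest_hip_or_spine = rng.normal(-1.5, 1.0, size=200)
total_left_radius = 0.6 * lowest_hip_or_spine + rng.normal(0, 0.8, size=200)

r, p = pearsonr(total_left_radius, lowest_hip_or_spine)
print(f"R = {r:.2f}, p = {p:.3g}")   # analogous to the reported R = 0.60 for the total left radius
```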
|
27 |
Strategies to Improve Quantitative Proteomics: Implications of Dimethyl Labelling and Novel Peptide Detection. Boutilier, Joseph 21 March 2012
In quantitative proteomics, many LC-MS-based approaches employ stable isotopic labelling to provide relative quantitation of the proteome in different cell states. In a typical workflow, peptides are first detected and identified by tandem MS scans before proteins are quantified, which leaves the researcher with a large amount of data that is not useful for quantitation. It is desirable to improve the throughput of current approaches to make proteomics a more routine experiment with an enhanced capacity to detect differentially expressed proteins. This thesis reports developments towards this goal, including an assessment of the viability of stable dimethyl labelling for comparative proteomic measurements and the evaluation of a dynamic algorithm, Parallel Isotopic Tag Screening (PITS), for detecting isotopically labelled peptides for quantitative proteomics without the use of tandem MS scans.
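As a rough illustration of the underlying idea of screening MS1 peaks for isotopic label pairs without tandem MS, the sketch below pairs peaks separated by an assumed per-label dimethyl mass difference; it is not the PITS algorithm itself, and the mass constant and tolerances are stated assumptions.

```python
# Assumed light-vs-medium dimethyl mass difference per labelled site (Da); illustrative only.
LABEL_DELTA = 4.0251

def find_label_pairs(peaks, n_labels=1, charge=2, tol=0.01):
    """peaks: list of (mz, intensity); returns candidate (light, heavy) m/z pairs."""
    expected_shift = n_labels * LABEL_DELTA / charge
    mzs = sorted(mz for mz, _ in peaks)
    pairs = []
    for mz in mzs:
        target = mz + expected_shift
        # Naive linear search; a real implementation would use a binary search or index.
        for other in mzs:
            if abs(other - target) <= tol:
                pairs.append((mz, other))
    return pairs

# Usage: find_label_pairs([(500.27, 1e5), (502.28, 9e4)], n_labels=1, charge=2)
# flags the 500.27 / 502.28 pair as a possible labelled peptide.
```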
|
28 |
SCANS Framework: Simulation of CUAS Networks and Sensors. Riegsecker, Austin 15 December 2020
Counter Unmanned Aerial System (CUAS) security systems carry unrealistic performance expectations inflated by marketing and idealized testing environments. By developing an agent-based model to simulate these systems, average performance metrics can be obtained that better represent true system performance.

Because of high cost, excessive risk, and an exponentially large parameter space, it is unrealistic to test a CUAS system for optimal performance in the real world. Agent-based simulation can provide the necessary variability at low cost and allows numerous parameter combinations to be explored, providing actionable output about the CUAS system.

This study describes and documents the Simulation of CUAS Networks and Sensors (SCANS) Framework, a novel attempt at developing a flexible modelling framework for CUAS systems based on device parameters. The core of the framework rests on sensor and communication-device agents. The sensors, including acoustic, radar, passive radio frequency (RF), and camera, use input parameters, sensor specifications, and UAS specifications to calculate values such as the sound pressure level, received signal strength, and maximum viewing distance. The communication devices employ a nearest-neighbor routing protocol to pass messages from the system, which are then logged by a command and control agent.

The framework is flexible enough to model nearly any CUAS system and is designed to be easily adjusted. It can report true positives, true negatives, and false negatives in terms of UAS detection. For testing purposes, the SCANS Framework was deployed in AnyLogic, and models were developed based on existing, published, empirical studies of sensors and UAS detection.
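A minimal sketch of the kind of per-sensor calculation the framework performs, assuming simple spherical-spreading acoustics and free-space RF propagation (the source level, frequency and distances are illustrative, not SCANS parameters):

```python
import math

def spl_at_distance(spl_ref_db: float, d_ref_m: float, d_m: float) -> float:
    """Sound pressure level at distance d under spherical spreading (inverse-square law)."""
    return spl_ref_db - 20.0 * math.log10(d_m / d_ref_m)

def free_space_path_loss_db(d_m: float, freq_hz: float) -> float:
    """Free-space path loss, used to estimate received RF signal strength."""
    return 20.0 * math.log10(d_m) + 20.0 * math.log10(freq_hz) - 147.55

# Example: a UAS emitting ~80 dB SPL at 1 m heard from 150 m,
# and the path loss of a 2.4 GHz control link over the same distance.
print(spl_at_distance(80.0, 1.0, 150.0))        # ~36.5 dB at the acoustic sensor
print(free_space_path_loss_db(150.0, 2.4e9))    # ~83.6 dB loss at the passive RF sensor
```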
|
29 |
Automated Pulmonary Nodule Detection on Computed Tomography Images with 3D Deep Convolutional Neural Network. Broyelle, Antoine January 2018
Object detection on natural images has become a single-stage, end-to-end process thanks to recent breakthroughs in deep neural networks. By contrast, automated pulmonary nodule detection is usually a three-step method: lung segmentation, generation of nodule candidates, and false-positive reduction. This project tackles the nodule detection problem with a single-stage model using a deep neural network. Pulmonary nodules have unique shapes and characteristics that are not present outside the lungs. We expect the model to capture these characteristics and to focus only on elements inside the lungs when working on raw CT scans (without segmentation). Nodules are small, sparsely distributed and infrequent. We show that a well-trained deep neural network can spot relevant features and keep the number of region proposals low without any extra pre-processing or post-processing. Given the visual nature of the task, we designed a three-dimensional convolutional neural network with residual connections, inspired by the region proposal network of the Faster R-CNN detection framework. The evaluation was performed on the LUNA16 dataset. The final score is 0.826, the average sensitivity at 0.125, 0.25, 0.5, 1, 2, 4, and 8 false positives per scan. This can be considered an average score compared to other submissions to the challenge; however, the solution described here was trained end-to-end and has fewer trainable parameters.
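For context, the 0.826 figure is the LUNA16 competition metric: the mean sensitivity read off the FROC curve at seven false-positive rates. A minimal sketch, with invented FROC points standing in for the thesis results:

```python
import numpy as np

def average_froc_sensitivity(fp_per_scan, sensitivity):
    """Mean sensitivity interpolated at the seven LUNA16 operating points."""
    targets = [0.125, 0.25, 0.5, 1, 2, 4, 8]
    # Interpolate along the log of the false-positive axis, which is how FROC curves are usually read.
    return float(np.mean(np.interp(np.log(targets),
                                   np.log(fp_per_scan), sensitivity)))

# Invented FROC curve (false positives per scan vs. sensitivity), for illustration only.
fp = [0.1, 0.3, 0.6, 1.2, 2.5, 5.0, 10.0]
sens = [0.62, 0.72, 0.79, 0.84, 0.88, 0.91, 0.93]
print(average_froc_sensitivity(fp, sens))   # a single summary number comparable to the reported 0.826
```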
|
30 |
Assessment of lung damages from CT images using machine learning methods. Chometon, Quentin January 2018
Lung cancer is the most commonly diagnosed cancer in the world, and it is mostly found incidentally. New technologies, and more specifically artificial intelligence, have lately attracted great interest in the medical field, as they can automate tasks or bring new information to medical staff. Much research has been done on the detection or classification of lung cancer, but most of it operates on local regions of interest and only a few studies consider the full CT scan. The aim of this thesis was to assess lung damage from CT images using new machine learning methods. First, single predictors were learned by a 3D ResNet architecture: cancer, emphysema, and opacities. Emphysema was learned by the network, reaching an AUC of 0.79, whereas cancer and opacity predictions were not much better than chance (AUC = 0.61 for both). Secondly, a multi-task network was used to predict all the factors together. Training with no prior knowledge was compared with a transfer learning approach using self-supervision. The transfer learning approach showed similar results in the multi-task setting for emphysema (AUC = 0.78 vs 0.60 without pre-training) and opacities (AUC = 0.61). Moreover, the pre-training approach enabled the network to reach the same performance as each single-factor predictor with only one multi-task network, which saves a lot of computational time. Finally, a risk score can be derived from the training for use in a clinical context.
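A minimal sketch of the multi-task idea described above, assuming a shared encoder whose features feed three independent binary heads (the feature size, head layout and unweighted loss sum are assumptions, not the thesis architecture):

```python
import torch
import torch.nn as nn

class MultiTaskHead(nn.Module):
    """Three independent binary heads on top of shared (e.g. 3D ResNet) features."""
    def __init__(self, feat_dim: int = 512):
        super().__init__()
        self.heads = nn.ModuleDict({
            name: nn.Linear(feat_dim, 1)
            for name in ("cancer", "emphysema", "opacities")
        })

    def forward(self, features: torch.Tensor) -> dict:
        return {name: head(features).squeeze(-1) for name, head in self.heads.items()}

criterion = nn.BCEWithLogitsLoss()

def multi_task_loss(logits: dict, labels: dict) -> torch.Tensor:
    # Unweighted sum of the three binary cross-entropy terms.
    return sum(criterion(logits[k], labels[k].float()) for k in logits)

# Example with a dummy feature batch standing in for the shared encoder output.
head = MultiTaskHead()
feats = torch.randn(4, 512)
labels = {k: torch.randint(0, 2, (4,)) for k in ("cancer", "emphysema", "opacities")}
loss = multi_task_loss(head(feats), labels)
```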
|