Gov. Brad Little
Kris "Tanto" Paronto
Who should attend: Enterprise Architects or individuals interested in the state of Enterprise Architecture at the National Laboratories.
Purpose: The roundtable will be an open discussion of current efforts, challenges, and lessons learned in technology, architecture, and processes relating to enterprise architecture.
In modern IT services delivery, bespoke systems and multitudes of point-solution products are rapidly being replaced with large-scale enterprise cloud services from providers like Amazon, Microsoft, ServiceNow, Salesforce, and others. This briefing will present a comprehensive construct, known as a service brokerage, for effectively integrating the most common enterprise cloud service platforms through a single portal with unified identity, end-to-end financial management, and a hybrid cloud e-commerce model. The presentation will focus on the benefits and use cases of a service brokerage, where the key integration points are, and what technologies and methods can be used to fully integrate a mix of private and public cloud services to deliver full-scope IT outcomes. It will also review a large-scale reference architecture and business construct, deployed in the real world, for standing up a service brokerage system. Solution elements include unified invoicing/billing and TBM-consistent reporting across a variety of suppliers; simplification of the end user's IT consumption experience through heavy process automation using AI & ML; the role and integration of API gateways; carrier brokerage through e-bonding; and cloud brokerage & orchestration.
A strategy for strengthening and enhancing our culture of Cyber Security Awareness using creativity and input from a site-wide focus group.
People are both the biggest challenge and the number one solution in the war against cyber criminals, so it is important to do everything we can to increase cybersecurity awareness. Smart enterprises need many components to implement effective cybersecurity defenses and strengthen their security culture. We are working toward just that.
The INL Cybersecurity Awareness Program is changing and growing to reach more users than ever before. Although our current program is good, we are reaching for a more dynamic, continually evolving program. We are not focusing on punishing employees who make mistakes; we are focusing on education, being helpful cyber professionals, understanding the risks, and working together to keep our systems and information safe and secure. We want to promote and develop an ever-changing, ongoing security culture with a team component.
We started a Cyber Security Awareness Focus Group and selected volunteers from outside Information Management because we wanted a site-wide perspective. Many great ideas came out of these meetings.
This presentation will review both current and planned future improvements.
The Idaho National Laboratory (INL) currently utilizes separate, isolated network management system (NMS) installations to monitor the internal and DMZ data network infrastructure, plus an additional NMS installation from a different vendor to monitor servers on the internal network. While the separate installations provide greater security, they cannot provide the comprehensive data network view that INL desires.
INL is implementing a single-vendor NMS solution that will reduce our multiple installation instances to a single instance. With the new design we will be able to better leverage the NMS application to monitor not only data network infrastructure but also servers, applications, and storage, and to provide a central dashboard viewable by all teams. The new holistic view will better enable teams to pinpoint problem areas and be proactive rather than reactive.
Gone are the days when multiple staff supported a single instance of an application; now each staff member is required to support dozens or maybe even hundreds of systems. This abstract explores the problem and some solutions: automation of performance metrics and notification can support this environment.
We want to know with certainty that all systems are online and delivering acceptable service. Fortunately, there are metrics at each layer of each system that indicate how well a particular aspect of the system is performing, and past metric data can be used to predict how a system will perform in the future. Once a performance problem has been identified, the next step is to analyze and correct it or minimize its impact on system performance. At INL, we use tools such as SolarWinds, Everbridge, Oracle Enterprise Manager (OEM), Splunk, and Watchdog to collect the data and send notifications when thresholds are crossed.
At INL, Everbridge delivers messages to the staff responsible for each alert type.
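To make the pattern concrete, here is a minimal, hypothetical sketch of threshold-based alerting in Python; the system names, metrics, and thresholds are invented, and production tools such as SolarWinds and Everbridge implement this far more robustly.

```python
# Minimal sketch of threshold-based alerting (illustrative only).
# All names and values are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class Metric:
    system: str     # e.g., "db-prod-01"
    name: str       # e.g., "cpu_utilization_pct"
    value: float
    warn: float     # warning threshold
    crit: float     # critical threshold

def check(metric: Metric) -> str | None:
    """Return an alert severity if a threshold is crossed, else None."""
    if metric.value >= metric.crit:
        return "CRITICAL"
    if metric.value >= metric.warn:
        return "WARNING"
    return None

def notify(severity: str, metric: Metric) -> None:
    # In practice this would call a notification service (e.g., Everbridge);
    # here we just print the message that would be delivered.
    print(f"[{severity}] {metric.system}: {metric.name}={metric.value}")

for m in [Metric("db-prod-01", "cpu_utilization_pct", 97.0, warn=80, crit=95)]:
    severity = check(m)
    if severity:
        notify(severity, m)
```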
Effective risk management is a proactive exercise that protects an organization and ensures realization of its business goals and objectives. Oak Ridge National Laboratory's Information Technology Services Division has targeted its risk management program as a continuous improvement opportunity. Cybersecurity risks, project risks, and a subset of operational risks were being identified, documented, and assessed, but risks were not visible, and reporting was a manual effort. Staff realized these risks represented only a subset of the total risk the division needed to manage.
ITSD's risk management improvement effort aimed to increase the effectiveness of risk identification, assessment, mitigation and, especially, visibility and reporting. This presentation will focus on steps taken to mature the division's risk management program by (1) expanding the body of risks addressed to include IT operational and business risks and (2) implementing risk identification and monitoring in an automated tool, ServiceNow's GRC. The rollout process, the results of the changes, the current state, future plans, and lessons learned will also be discussed.
Security architectures have typically involved many layers of tools and products as part of a defense-in-depth strategy. Unfortunately, these have not been designed to work together, leaving gaps in how security teams bridge multiple domains. In today's threat landscape these gaps are magnified and, in many cases, hinder optimal use of these investments and response capabilities. What is needed is a consistent framework that provides a common interface for end-to-end visibility, automated retrieval, and collaboration in a heterogeneous, multi-vendor environment, enabling security teams to quickly adapt to attackers' tactics using a range of actions, including automated response. Such an approach would enable participants to extract new insights from existing security architectures and improve investigations with more context from key security and IT domains.
This presentation will focus on how machine data from across the entire security and IT ecosystem can accelerate such an Adaptive Response Initiative to create a robust and agile defense against today's advanced and increasingly complex threats. We will also address how this approach can build security teams' confidence to automate response while optimizing your investments in defense-in-depth tools.
Microsoft Power BI integrated with the Azure data platform enables high-performance, scalable BI systems in the cloud that deliver actionable insights. This session will cover Azure data platform integration and dive into new Power BI features, including aggregations for big data, incremental refresh, and semantic modeling techniques for large models. You will see how to unlock petabyte-scale datasets using Azure Databricks and Azure Data Lake (Gen 2) in a way that was not possible before! You will also learn how to use Power BI Premium to create semantic models over big data that are reused throughout your organization, whether helping HR/Finance or unlocking the next big scientific discovery.
The Consolidated Nuclear Security IS&S organization provides information technology solutions in support of the overall mission of CNS as well as the NNSA Production Office. As such, the IS&S information technology budget is sizable, given the vital nature of work performed at the Pantex Plant and Y-12 National Security Complex. As at other companies across all industries, communicating the total cost to develop, deliver, maintain, and optimize IT services in terms of value or ROI can be challenging. Simply put, organizations need a true understanding of where dollars are being spent based on the services required and the services provided.
In response to this challenge, IS&S has taken steps to implement an integrated solution to manage the demand for IT services, including tracking the cost to deliver and operationally maintain those services. By taking a strategic approach to implementing several key IT management frameworks, we expect both the business and its customers to gain a better understanding of, and appreciation for, the delivery and management of end-to-end IT services, including spend and capacity information for future portfolio investment/tradeoff decisions.
In response to leadership's need for predictive analysis, an interactive visualization of a safety prediction model was developed and made accessible. In 2014, a safety prediction model was developed to discover a set of predictive indicators that, in combination, are effective in predicting the likelihood of a safety incident for an employee at Sandia National Labs (SNL) within the following six months.
Initially, the results of the safety model were displayed using histograms for each predictive indicator. Users could only display two histograms at a time and had to refresh them one indicator at a time to determine the highest risks. Revitalizing the application required enhanced functionality built on user-driven design choices. The model and visualization were reproduced using the R programming language, allowing for greater flexibility and robust statistical power. Using R Shiny, the shortcomings of the previous visualization were corrected.
The new visualization enables users to clearly identify the top safety risk indicators of an organization. Users can then partner with safety professionals to determine specific actions to take that will mitigate the risks. Having taken specific actions, users can measure the success of those actions and share with others as best practices.
Today, there is pressure from management and peers to move our computing environments to the Cloud. Many corporations are finding great success, while others are finding migration to the cloud difficult and more expensive. So the question is, why is there such a discrepancy in results when using the cloud? Change has always been difficult, and the change to a cloud environment can be more difficult than most. Having a migration plan for moving to the Cloud is a must. But what belongs in the plan? Why is the cloud environment different from an on-premises environment?
This workshop will provide a hands-on experience of migrating an existing on-site environment to the Cloud. Participants will learn how to use the new technologies that are uniquely suited to the cloud. Understanding the cloud paradigm and "serverless computing" changes our approach to building cloud solutions. Unlimited horizontal scaling capacity is an immense advantage, but it can also be a major drawback if not implemented properly. The workshop will show how to plan and build serverless computing solutions that maximize the advantages of the cloud. Imagine an environment where you never worry about software versions, data structures, patching, or deployment frequency. The new world of cloud solutions is novel and can be overwhelming, but claiming its power comes through understanding how it works and planning for success. Most companies migrate from on-premises to cloud like for like; this is expensive, and the value of the cloud is not realized. With the proper planning taught in this workshop, you can make the right decisions for your company on how to use the cloud, with significant cost reductions and optimization of the resources needed to run your systems. You will also learn when and what should be moved to the cloud and how to use a hybrid solution (both cloud and on-premises). Come ready to work hard, change your old thought processes, and see real, live examples of cloud migration solutions. This will change how you see cloud solutions.
This workshop will be taught by experts with real-world experience migrating to the Cloud.
Oracle Database Patching Automation is a solution that optimizes the execution of quarterly database patches in an innovative, collaborative, and dynamic environment with the push of a button. Oracle releases quarterly patches known as critical patch updates and/or bundled patches. Given the number of databases in development, test, sandbox, and production data centers that need to be patched, this becomes a major manual exercise every quarter for DBAs as well as other resources. Patching each database involves a number of steps, including downloading the patches, upgrading OPatch in each $ORACLE_HOME, shutting down the databases and listeners, applying the patches, starting the databases in restricted mode, applying Oracle SQL scripts, restarting the databases in normal mode, and verifying the OPatch inventory. INL has developed a way to automate this process utilizing Team Foundation Server: DBAs can now push a button and the patching process is automated. The automated patching solution reduces the effort and time involved, freeing DBAs to spend more time on other projects and maintaining current systems.
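As a rough illustration of the sequence above (not INL's actual Team Foundation Server pipeline), the following Python sketch orchestrates the main steps; the $ORACLE_HOME path, patch archive, and wrapper SQL scripts are all placeholders.

```python
# Hedged sketch of the quarterly patch sequence described above, orchestrated
# from Python. Paths and script names are placeholders; a real pipeline would
# parameterize, log, and verify each step per database.
import subprocess

ORACLE_HOME = "/u01/app/oracle/product/19.0.0/dbhome_1"  # placeholder

def run(cmd: list[str]) -> None:
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)   # stop immediately if any step fails

# 1. Upgrade OPatch in the target $ORACLE_HOME (archive path is a placeholder).
run(["unzip", "-o", "/stage/p6880880_opatch.zip", "-d", ORACLE_HOME])
# 2. Stop the listener and database (shutdown script is a local wrapper).
run([f"{ORACLE_HOME}/bin/lsnrctl", "stop"])
run(["sqlplus", "/", "as", "sysdba", "@shutdown_immediate.sql"])
# 3. Apply the patch from the unpacked patch directory.
run([f"{ORACLE_HOME}/OPatch/opatch", "apply", "-silent"])
# 4. Restart in restricted mode, run post-patch SQL, then reopen normally
#    (again via a placeholder wrapper script).
run(["sqlplus", "/", "as", "sysdba", "@startup_restrict_and_postpatch.sql"])
# 5. Verify the patch inventory.
run([f"{ORACLE_HOME}/OPatch/opatch", "lsinventory"])
```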
Node Slicing enables network operators to create multiple partitions on a single router. Each partition behaves as an independent router, with its own dedicated control plane, data plane, and management plane, allowing the implementation of multiple services on a single physical router. Node Slicing provides a new way to converge networks, scale infrastructure, deploy services, and manage risk.
This presentation will include an overview of node slicing and a discussion of how it works and how it might be leveraged in R&E networks. A node slicing pilot and evaluation recently completed by Internet2 and several major regional R&E networks will also be discussed.
Establishing an organization's software asset position is crucial to regulating institutional spending and ensuring compliance. Los Alamos National Laboratory (LANL) encompasses over 16,000 desktop and laptop endpoints. In an enterprise environment, the number of software licenses purchased should align with the number consumed, consistent with standards and best practices. LANL's IT department has selected Flexera as an enterprise Software Asset Management Solution (SAMS) to replace an in-house-developed storefront. Our goal is to provide a comprehensive and integrated view of the software on each endpoint. We will discuss centralized SAMS processes of software discovery, deployment, utilization, reclamation, and reporting to improve software asset management in conjunction with Casper JAMF and SCCM tools. Emphasis will be placed on forward-thinking, customer-focused solutions and reduction of institutional software maintenance costs.
David Seigel, Deputy Director for Future Cyber Operations, OCIO, U.S. Department of Energy
Description: This track will focus on DOE's implementation of CDM and will provide insights into DOE's roadmap for implementing CDM across the DOE complex. The presentation will include an overview of the CDM Program, how CDM fits into DOE's cybersecurity roadmap, what DOE has on the roadmap for CDM, and some lessons learned during early implementations of CDM across DOE.
Mission work at Sandia National Laboratories (SNL) frequently involves manual analysis of significant volumes of data by experts well-versed in specific domains. This process is often tedious and inefficient for humans, but depending on the domain and the risks involved with inaccurate determinations, complete automation by machine learning may not be acceptable. One possible solution is to use deep learning, a type of machine learning that processes raw data through a series of levels of abstraction, as a tool for aiding and streamlining analyst decisions. This presentation covers one such approach currently being explored at SNL. It is suitable for a broad technical audience, and will include a brief introduction to deep learning with neural networks and a discussion of this application's performance on a real-world classification problem at SNL.
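For readers new to the topic, a minimal classifier sketch in PyTorch illustrates the idea of processing data through stacked levels of abstraction; this is a generic toy example with random stand-in data, not SNL's model or application.

```python
# A minimal feed-forward classifier in PyTorch, included only to illustrate the
# "levels of abstraction" idea; it is not SNL's model or data.
import torch
from torch import nn

model = nn.Sequential(            # each layer learns a more abstract representation
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, 2),             # two output classes, e.g., relevant / not relevant
)
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(256, 128)         # stand-in for raw feature vectors
y = torch.randint(0, 2, (256,))   # stand-in for analyst-provided labels
for _ in range(10):               # a few gradient steps
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()

# In an analyst-aiding workflow, softmax scores could rank items for human
# review rather than making final determinations automatically.
scores = torch.softmax(model(x), dim=1)
```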
Sandia National Laboratories is a multimission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA0003525. SAND2019-2300 A.
Critical to our mission as Database Administrators is the availability and security of the systems we support at INL. To that end, Oracle security updates and bug fixes are released quarterly, and the requirement is to apply each patch within 30 days of its release by Oracle. Additionally, the individual databases must be scanned to identify the security vulnerabilities they are exposed to. INL IM is using Trustwave DBProtect to scan and report the vulnerabilities. Both tasks can be done manually; however, that becomes a tedious, time-consuming effort costing many labor hours.
Heidi Nelson will demonstrate how IM has developed a Microsoft TFS automation pipeline that saves hours of DBA staff time while remaining compliant with the Oracle patching mandate. Bill Mitzmacher will then demonstrate how IM has extended the automation to include DBProtect scanning and feed those reports to the ServiceNow platform for ticketing and tracking.
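As one hedged illustration of the ticketing hand-off (not necessarily how IM's pipeline is built), a scan finding can be filed through ServiceNow's standard Table API; the instance URL, credentials, and field values below are placeholders.

```python
# Hedged sketch: filing a ServiceNow incident from a DBProtect finding via the
# standard ServiceNow Table API. A production pipeline would use a service
# account, proper secrets handling, and richer field mapping.
import requests

instance = "https://example.service-now.com"   # placeholder instance
payload = {
    "short_description": "DBProtect: high-severity finding on ORCL_PROD",
    "description": "Quarterly scan flagged an unpatched component.",  # placeholder
    "urgency": "2",
}
resp = requests.post(
    f"{instance}/api/now/table/incident",
    auth=("svc_dbprotect", "********"),        # placeholder credentials
    headers={"Content-Type": "application/json", "Accept": "application/json"},
    json=payload,
    timeout=30,
)
resp.raise_for_status()
print("Created:", resp.json()["result"]["number"])   # e.g., INC0012345
```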
Dr. Mallory Stites
The NCCoE's Trusted Cloud project is working to design, engineer, and build solutions leveraging commercial off-the-shelf technology and cloud services to deliver a trusted cloud implementation. This will allow organizations in regulated industries to leverage the flexibility, availability, resiliency, and scalability of the cloud while complying with applicable laws such as FISMA, PCI, and HIPAA, as well as industry-neutral voluntary frameworks like the NIST Cybersecurity Framework.
The technology stack includes modern hardware and software that can be leveraged to support the described use cases and accelerate the adoption of cloud technology. Building on the work done within NIST Interagency Report (IR) 7904, Trusted Geolocation in the Cloud: Proof of Concept Implementation, NIST's project will expand upon the security capabilities provided by trusted compute pools in a hybrid cloud model to include:
• Data protection and encryption key management enforcement focused on trust-based and geolocation-based/resource pools secure migration
• Persistent data flow segmentation before and after the trust-based and geolocation-based/resource pools secure migration
• Industry sector compliance enforcement for regulated workloads between the on-premises private and public clouds
These additional capabilities will not only provide assurance that workloads in the cloud are running on trusted hardware and in a trusted geolocation or logical boundary, but also will improve the protections for the data in the workloads and data flows between workloads. The team will publish a NIST Special Publication (SP) 1800 document to describe the security properties, architecture design decisions, technology stack, and implementation to support typical enterprise regulated workload use case scenarios across the private and public cloud. The publication will be divided into three volumes to provide contextual information that is customized specifically for executives, enterprise information security officers and business units, and engineers and operators responsible for implementing and managing the solution.
In the wake of 2018's ever-larger breach disclosures, it appears that there is no more personal data left to compromise. All of our identifying information - usernames, passwords, bank accounts, medical information, passports, even our DNA profiles - is available to the criminal underground, and likely to foreign intelligence services, for the right price.
This talk aims to inform the attendee just how many informational records have been compromised in the last few years, the data types associated with those compromises and what all this private (and public) data exposure means. What are the implications for the average citizen? The average business? The Government? We will explore what this data is being used for currently as well as future implications for intelligence services, the military, businesses and Government agencies.
John Pringle, one of the Senior Managers in AWE's Cyber Resilience and Information Assurance group, will describe AWE's continuing experience of applying information security management and assurance. AWE is approaching the final stages of modernisation and transition - including the introduction of Office 365 and other Cloud-based services.
Active Defence delivers a bespoke monitoring solution; it also brings new thoughts to modelling threats, assessing vulnerabilities, and managing controls. Governance Risk and Compliance addresses how oversight by the UK's equivalent of the Authorizing Official is being managed to the benefit of both parties. In managing Realistic Risk a pragmatic stance is being applied to a wide range of environments including cloud software services, operational technology and cloud workloads. External Assurance is continuing to be delivered by trusted specialist providers, way beyond traditional penetration testing. Security Architecture addresses how security is embedded at the design stage. And AWE's proactive approach to Supply Chain Information Risk Management is recognised as being leading edge.
Not to mention Brexit.
Oak Ridge National Laboratory migrated SharePoint and email services from on-premise managed servers to Microsoft Office 365 during Fiscal Year 19.
For the Office 365 mail migration, ORNL moved over 7,000 mailboxes from the on-premise Exchange 2013 hybrid environment to Exchange Online. During the migration, users encountered several unexpected service-impacting issues:
• Slow client performance when reading, sending, receiving, and searching email, as well as moving emails from one folder to another
• Mobile phones no longer providing notifications for new email messages, improperly synching calendar entries, and devices displaying incorrect numbers of read/unread messages
• Problems with users' ability to access shared calendars and shared mailbox folders
• The inability to open Outlook or send mail after migration
In this discussion, we will walk through the primary issues, current recommendations, and other lessons learned that could assist other facilities during their O365 Exchange Online migrations.
This will be compared to ORNL's experience migrating our internal SharePoint 2010 environment to SharePoint Online, a transition that took a much different path. There are comparisons between the two, and overall lessons from the built-in synergy of the O365 environment.
No matter how high the quality of your products and services is, if the expectations of your customer are not met, you will fail at customer satisfaction. Setting and managing those expectations is the most critical piece of maintaining great customer satisfaction. Every failure of service is related to customer expectations. Technical, equipment, and delivery failures can sometimes limit our ability to meet expectations; however, if handled correctly up front, they can be a win.
• The three types of expectations: explicit, implicit, hidden
• Engineering Customer Expectations
• Influencing Expectations
• The Delta Principle
Effective configuration management is essential for teams operating a large, complex network. Change notifications keep all team members aware of modifications to the network. Up-to-date configuration backups facilitate recovery from hardware failures. The ability to review changes from certain time periods, or to specific areas of a configuration, can be extremely helpful during troubleshooting. Automated compliance checking and remediation ensures that dozens or hundreds of devices remain consistent with desired configuration templates, and can be invaluable for avoiding security vulnerabilities.
In this presentation I demonstrate a solution for configuration management using Oxidized (a free, open-source tool) and Bitbucket (an inexpensive tool from Atlassian, the same company that sells Jira and Confluence, among others). This solution began as an experiment in May 2018 to see what might be possible and has since spread rapidly to production use across multiple Sandia networks. We have found it to be superior to commercial configuration management solutions that cost hundreds or thousands of times more than the integration of Oxidized and Bitbucket.
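Oxidized itself is a Ruby application configured via YAML, so the following is only a Python sketch of the underlying pattern it automates: fetch a device configuration, commit it to a Git repository, and let the Git host (here, Bitbucket) surface diffs, history, and notifications. The device name, repository path, and command are hypothetical.

```python
# Not Oxidized itself, but a minimal Python sketch of the same pattern:
# pull a device config, commit it to Git, push to the hosting service.
import subprocess
from pathlib import Path
from git import Repo  # pip install GitPython

repo = Repo("/srv/network-configs")            # local clone of the Bitbucket repo
config_path = Path(repo.working_tree_dir, "core-sw-01.cfg")

# Fetch the running config (device and command are placeholders).
config_text = subprocess.run(
    ["ssh", "backup@core-sw-01", "show running-config"],
    capture_output=True, text=True, check=True,
).stdout

config_path.write_text(config_text)
repo.index.add([str(config_path)])
if repo.is_dirty():                            # commit only when something changed
    repo.index.commit("core-sw-01: config change detected")
    repo.remote("origin").push()
```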
Dr. Kelly Sullivan
How many ground-breaking ideas are lost because there isn't a quick and easy way to capture them, or because junior staff lack the confidence to pitch them? What if the concept is too small to sell to a usual sponsor, or the idea falls outside of a staff member's typical area of expertise? By engaging staff, PNNL learned about these various barriers, and found that staff needed an easy way to share great ideas and a straightforward way to see them through.
Enter QuickStarter, a crowd-sourcing program loosely based on the KickStarter pitch-funding concept, in which more than 60% of all Lab staff have participated, either as idea proposers or as "backers". Now in its fifth year of lab-wide operation, the QuickStarter intranet site has given hundreds of staff an opportunity to share their ideas, providing funding for almost half of them.
Come see a demonstration of the QuickStarter website and learn about the technology that drives it. We will also talk about success stories that have come out of the program, and its great value for PNNL. Lastly, learn how the QuickStarter tool can be freely downloaded and adapted for use at other national labs with equal success.
Darren Van Booven
According to FY18 FISMA reporting, most agencies still have significant room for improvement in their ability to detect and respond to cyber threats. This talk explores the business case for the cloud-based security operations center (SOC) and how the FedRAMP HIGH options make this an attractive option to consider. We'll discuss a strategic approach to building out a security FISMA enclave that is capable and compliant.
Why the cloud? Models such as zero-trust evolved on the basis of not trusting any device, location, or person until proven otherwise. To meet current expectations for identifying and protecting against threats, the SOC must collect, store, and quickly analyze large quantities of diverse data types. A fully functional SOC today looks like an exercise in advanced data analytics. The elastic storage and dynamic compute this requires is where the cloud excels.
Why FedRAMP HIGH? Adding to the complexity, organizational divides often segregate physical security and cyber security data. Operational technology (OT) networks are often segregated from enterprise information technology (IT). The enemy does not care about your concerns. If there's a weakness they can exploit, they will. Despite this, compliance is a reality. The FIPS 199 HIGH controls offer an environment where all sensitive data types can be protected in a compliant manner.
Sandia National Laboratories (SNL)'s Data and Software Security team is strengthening the security posture of SNL's software development practices. We are leading change by moving security earlier in the development process and providing a comprehensive approach that integrates and automates security throughout the development lifecycle. The approach addresses everything from training to testing and scanning, with the understanding that both education and tools must be leveraged for security. SNL wishes to share this approach with the broader community, both to educate those who may be earlier in their journey and to gain feedback on our approach. This presentation will provide an overview of SNL's approach, and related presentations provide in-depth coverage of individual topics. By following these steps, regardless of the programming methodology or framework being used, SNL believes that the security of software applications can be improved, reducing both the risk and the cost to the organization compared with finding vulnerabilities after the product has been deployed. SNL is managed and operated by NTESS under DOE NNSA contract DE-NA0003525.
Dr. Patrick Carlson
Data visualization is an increasingly integral part of communicating analytic results, providing analysis, diagnostics, metrics, dashboards, and more in real time. There are many visualization tool options; at the forefront of these are Tableau and Shiny.
Tableau is a popular COTS product that provides a desktop-based client for authoring as well as a server for hosting visualizations. It provides a simple drag-and-drop interface that makes exploratory analysis and creating standard charts easy.
Shiny is an open-source library for the popular programming language R, developed by RStudio. Developers can use the open-source RStudio Desktop IDE to develop and test their Shiny applications locally.
Shiny has filled a valuable role in cases where rich user interactions were needed (multi-page wizards, network diagrams, connectivity to web-services, etc.). Since it uses the R language, Data Scientists and Analysts, who are already familiar with R, can get started quickly.
We in the Sandia Data Sciences department would like to share our experiences with both tools - where each excels, their pros and cons, and how to determine which will best fit the needs of the conference attendees. Both tools serve a particular niche. Examples of visualizations in both Tableau and Shiny will be provided.
In 2018 INL made the landmark transition from Google Apps for Government to Office 365 for cloud-based messaging and collaboration services. This presentation reveals the high and low points of the migration, and provides valuable lessons learned. Transitions of this type and magnitude are costly and time consuming. Other organizations can benefit from INL's experiences by learning to recognize and avoid blunders which are unique to the DOE environment. The lessons shared apply not only to Office 365, but other types of enterprise deployments.
Despite the carrots and sticks, admonishments and reward gift cards, users in the enterprise continue to make critical missteps. This presentation covers the top five things that users continue to do, despite seemingly obvious (to us!) consequences. Why are organizations, even those with impressive technology stacks and defensive layers, still vulnerable to user misbehaviors? What can security teams do to shape user behaviors, either to eliminate or mitigate these risks? Bring your best enterprise user awareness solutions, as we will share our concerns and hopefully "what works" based on our professional experiences.
AWE plays a crucial role in the UK's national defence. We have been at the forefront of the UK nuclear deterrence programme for more than 60 years. Supporting the UK's Continuous At Sea Deterrence programme and national nuclear security are at the heart of what we do.
Emphasising the user experience, this presentation gives an insight into AWE's Ozone Programme: a significant technology change from on-premises networks to a new Microsoft 365 cloud-based solution, with additional aspects to introduce new ways of working, enhance information management, and enable agile, flexible working as we transition to the modern workplace.
In this presentation we will share with you our journey so far: where we started, where we've been, where we are now, and where we're heading.
We will cover the following elements:
- The highs and lows of data migration
- A plethora of training and learning approaches
- The importance of easily digestible communications
- The roles and teams that have enabled our success to date
- How to handle change fatigue and even change dread
- Positively exploiting the disruptive impact of the new technology
- Stuff you just couldn't make up
The moral of this story is that whilst technology is a key component in successful corporate change programmes, it should be acknowledged as the enabler of the change, not the sole agent of change. The real challenge lies in managing cultural change, from the strategic direction emanating from the board room, all the way through to the adoption across the workforce at all levels.
Managing elevated and shared access credentials is one of the biggest challenges facing complex heterogeneous organizations today. Administrators must be able to access the systems they manage with sufficient rights to do their jobs, but organizations must control that access to ensure security and regulatory compliance.
Government enterprises must control the use of elevated privileges, but they need to find ways to PIV- or CAC-enable these accounts. Even with multifactor authentication to "check out" a privileged account, steps must be taken to mitigate account compromises. Real-time session analytics provides in-line assurance by learning normal behavior and comparing it to current behavior, with real-time in-line remediation adding a powerful layer of risk mitigation.
One Identity solutions allow all privileged accounts to be vaulted, audited, and analyzed in real time. The solution not only meets requirements for password changes on accounts that can't be CAC- or PIV-enabled but wraps those accounts with a secure, PIV/CAC-enabled check-in/check-out and auditing solution that shows who is using the accounts, how they are being used, and whether admin behavior is normal or abnormal.
News stories over the last several years have highlighted the critical need for effective Supply Chain Risk Management. In this session we will discuss (1) trending threats to Information and Communications Technology (ICT) supply chains; (2) why these threats are more important than ever to the government as well as critical infrastructure and other security-sensitive entities; and (3) how you, your partners, and your suppliers should be managing the risks associated with those threats.
This presentation explores the implementation of, and potential improvements to, the risk assessment model described by NIST Special Publication 800-30, dated September 2012. This document provides a great number of insights into the risk assessment process; the risk register approach is described by the various risk assessment tasks under Step 2, with further details in Appendices D through I. At the same time, it gives the end user/organization a great deal of latitude in its interpretation and application.
Further details are provided in the first YouTube video of the "Nov 2013 NIST Workshop", which describes a Qualitative risk assessment instead of the Semi-Qualitative approach claimed: all Semi-Qualitative likelihood values were converted into Qualitative values (Very High to Very Low) and filtered through the Qualitative matrices of Table G-5 and Table I-2. Furthermore, the assessed information derived from the tables in Appendices D through F was not utilized in determining the final risk values.
This risk register model has been implemented in Excel. Innovations include creating numerical equivalents to Table G-5 and Table I-2, with linear data interpolation used to enable Semi-Qualitative assessments. Only the weighted average (wAVG) algorithm is adopted from those suggested by NIST's Task 2-4; a weighted Root Mean Square (wRMS) algorithm was also implemented. Both algorithms use the weight values to either include or exclude the numerical results from Appendices D through F. Both algorithms yield the same result when all rating values are equal; otherwise, the wRMS algorithm is somewhat more aggressive.
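For concreteness, here is a small Python sketch of the two aggregation algorithms under invented weights and ratings (not values from NIST SP 800-30); it also shows the wRMS result running higher, i.e., "more aggressive", when ratings differ.

```python
# Sketch of the two aggregation algorithms discussed above. The weights and
# ratings shown are invented placeholders, not values from NIST SP 800-30.
from math import sqrt

def w_avg(ratings, weights):
    """Weighted average of semi-qualitative ratings (NIST Task 2-4 style)."""
    return sum(r * w for r, w in zip(ratings, weights)) / sum(weights)

def w_rms(ratings, weights):
    """Weighted root mean square; penalizes high individual ratings more."""
    return sqrt(sum(w * r**2 for r, w in zip(ratings, weights)) / sum(weights))

ratings = [8, 5, 2]   # e.g., semi-qualitative scores on a 0-10 scale
weights = [3, 2, 1]   # relative importance; a weight of 0 drops a factor entirely

print(w_avg(ratings, weights))   # 6.0
print(w_rms(ratings, weights))   # ~6.40 -- higher, i.e., more "aggressive"
```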
Several examples of everyday cyber risk scenarios will be included as a proof of concept. Both purely Qualitative and Semi-Qualitative results will be included.
Like many major corporations, Sandia National Laboratories has many irons in the fire. One infrastructure team carried the bulk of a department's project work and could never seem to get above water. This presentation discusses how we used lessons learned from The Phoenix Project by award-winning author Gene Kim.
By understanding the Three Ways and the flow of work in the department, we were able to use process and tools to capture all work and identify it as one of four types: Business Projects, Internal Projects, Changes, and Unplanned Work. This categorization, with the work displayed on a customizable Kanban board, eliminated confusion over prioritization. Coupled with the transparency of metadata-fueled reporting and communications, this new process transformed the program.
As Erik Reid, the book's fictional IT guru, said to our hero, Bill Palmer: "Your job as VP of IT Operations is to ensure the fast, predictable, and uninterrupted flow of planned work that delivers value to the business while minimizing the impact and disruption of unplanned work, so you can provide stable, predictable, and secure IT service."
Network automation and orchestration are sought after as a means to achieve network consistency and operational efficiency. Automation and orchestration can lead to a more predictable, secure, and maintainable network infrastructure, promoting consistency and reliability through well-understood and predictably executed processes. Python, Ansible, and external APIs offered by third-party tools provide the framework to build a network that can be driven by software. Our journey through the various levels of network automation has provided keen insight into some of the tools that are available, and how not to use them. The lessons we've learned can help other IT organizations form a plan for incorporating automation and orchestration into the daily operations of their networks, with the eventual goal of a network driven completely by software.
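As a flavor of what Python-driven network operations can look like, here is a short, hedged example using the open-source Netmiko library; the host, credentials, and commands are placeholders, not Sandia's actual tooling.

```python
# A small, hedged example of a Python-driven network task using the
# open-source Netmiko library. All device details are placeholders.
from netmiko import ConnectHandler  # pip install netmiko

device = {
    "device_type": "cisco_ios",          # driver name; varies by platform
    "host": "core-sw-01.example.org",
    "username": "automation",
    "password": "********",
}

with ConnectHandler(**device) as conn:
    # Read-only check first: verify the device is what we expect.
    print(conn.send_command("show version | include uptime"))
    # Then push a small, well-understood change set.
    output = conn.send_config_set([
        "ntp server 192.0.2.10",
        "logging host 192.0.2.20",
    ])
    print(output)
```

In practice, the same calls would be wrapped in inventory-driven loops (or handed to Ansible modules) so that one template applies consistently to hundreds of devices.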
Faced with a critical gap in their multi-factor authentication (MFA) coverage and employees frustrated with the hassles associated with using HSPD-12 badges for computer logins, LLNL needed to find a more comprehensive solution to meet the DOE mandate while also simplifying the login experience. This presentation will describe LLNL's computer login journey and the motivations behind a new service called MyPass, which leverages YubiKey technology. The session will present the unique customer-focused approach that was taken to deliver the service, as well as the key architectural pieces, including self-service provisioning and bypass capabilities. Finally, the presentation will provide a peek into the future: how LLNL will use MyPass to go beyond computer logins and get closer to a single login method across the site.
In this presentation, we introduce Hubot, a customizable, extensible chat bot that operates as a first-class user in our DOE-wide chat to support collaboration and resolution of issues. In contrast to traditional email or ticketing systems, chat offers a more natural, real-time forum for teams to discuss and resolve issues. Chat bots, sometimes called "integrations", expand this paradigm by providing an interface to IT infrastructure, letting teams execute resolutions and receive status updates within the chat. As a first-class citizen in the chat, Hubot is capable of more than just an "integration". For example, Hubot could be configured to listen for "What is the load on our web server?", query the server's web request logs, calculate the request rate and generate a graph of it, and then respond in the chat with an image attachment of the graph. Alternatively, the bot could be configured to continuously monitor resources and alert the team via chat when a threshold is reached, rather than waiting for a prompt. We present some of the community-developed Hubot plugins that help streamline processes and improve team productivity in daily operations.
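Hubot scripts themselves are written in CoffeeScript/JavaScript; purely to illustrate the listen-query-respond pattern described above, here is a sketch of the same logic in Python, with a hypothetical helper and a stubbed rate calculation standing in for real log parsing and chat wiring.

```python
# Python illustration of the listen -> query -> respond pattern described
# above. This is NOT a Hubot script (those are CoffeeScript/JavaScript);
# the helper and values below are hypothetical stand-ins.
import re

def compute_request_rate(log_path: str) -> float:
    # Stand-in for real log parsing; a production bot would tail the web
    # server's access log and compute requests/second over a recent window.
    return 42.0

def handle_message(text: str, send) -> None:
    """Listen for the load question, query the data source, respond in chat."""
    if re.search(r"what is the load on our web server\??", text, re.IGNORECASE):
        rate = compute_request_rate("/var/log/nginx/access.log")  # hypothetical path
        send(f"Current request rate: {rate:.1f} req/s (graph attached)")

# A real bot framework would wire handle_message to the chat service;
# here we simulate one incoming message.
handle_message("What is the load on our web server?", send=print)
```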
While this paradigm can help us do our job more collaboratively and efficiently, it also raises new concerns. For example, how is Hubot's access to resources controlled, and perhaps more importantly, how is access to Hubot controlled? Additionally, how do we balance Hubot's automated alerts so they don't just annoy other chat members? Where does Hubot store data, and what can it listen to? We address some of the pitfalls we've encountered and how to avoid them.
Finally, we demonstrate deployment of Hubot, how to customize its functionality, and invite members of the audience to propose features they would like to see. While chat platforms continue to grow, extensible first-class citizen chat bots like Hubot are underappreciated resources in the movement, and we hope this presentation raises awareness to incorporate them into workflows.
Data is essential to the future of our nation. The federal government's mission to serve citizens cannot be accomplished without first managing its data, and the fact that data management is hard and data volumes are growing isn't news. So it comes as no surprise that many agencies are leveraging the Cloud and software-defined solutions to modernize legacy IT systems and embark on complex digital transformations. A successful digital organization has mastery of its data: it knows where all of its data resides and, as a result, is able to visualize, access, protect, and migrate data from on-premises to the cloud while also meeting required data compliance requirements. In reality, it doesn't matter how modern systems are if an agency can't harness the power of its data. Attend this session to learn how a comprehensive data management strategy is essential to successfully bridge the gap between IT modernization and true digital transformation.
Assemblyline is an automated malware analysis tool originally developed by the Canadian Communications Security Establishment (CSE), which supports bulk processing using multiple analysis engines. This presentation will discuss LLNL's usage of Assemblyline over the past year to handle automated analysis of manually and automatically submitted files, and its integration with email and network-based sensors. We will also discuss several custom analysis integrations that LLNL has developed for use in its environment.
Do you like the idea of making external collaboration easy and secure? Sounds great! What about allowing external researchers into your intranet with a single click? Sounds dangerous! But you're intrigued anyway, right? PNNL is excited to share their LabHub solution using Microsoft Teams, SharePoint Online, and other Office 365 services to provide a secure platform that greatly enhances research engagement at PNNL.
Our process allows a PNNL staff member to easily set up a team, invite external project members, and start working with others in under five minutes. All without SecurIDs, VPNs, and all the other legacy tools that give IT a bad name.
This talk is for the technical crowd - we'll cover Azure AD B2B services, Office 365 multi-tenancy, guest accounts, Authenticator, Forms, Logic Apps, JSON, and all the other cool stuff that makes this look easy!
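One building block behind this flow is Azure AD B2B guest invitations, which can be created through the Microsoft Graph invitations endpoint; the sketch below omits token acquisition (MSAL in practice) and uses placeholder email and URL values, so treat it as an illustration rather than PNNL's actual implementation.

```python
# Hedged sketch of the Azure AD B2B building block: creating a guest
# invitation through the Microsoft Graph invitations endpoint.
import requests

token = "<access token with User.Invite.All>"   # obtained via MSAL in practice
invite = {
    "invitedUserEmailAddress": "external.researcher@university.example",
    "inviteRedirectUrl": "https://teams.microsoft.com",   # where the guest lands
    "sendInvitationMessage": True,
}
resp = requests.post(
    "https://graph.microsoft.com/v1.0/invitations",
    headers={"Authorization": f"Bearer {token}"},
    json=invite,
    timeout=30,
)
resp.raise_for_status()
print("Invite status:", resp.json()["status"])   # e.g., "PendingAcceptance"
```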
Want to come prepared? Load the Microsoft Teams and Microsoft Authenticator app on your phone and you can join LabHub when we meet!
For nearly a decade, Los Alamos National Laboratory (LANL) has deployed numerous security tools to measure compliance and improve the security posture of information systems. However, these tools were primarily Windows-based, leaving Linux/UNIX operating systems to be manually secured by system administrators. Red Hat Enterprise Linux (RHEL) is the approved flavor of Linux deployed at LANL and must adhere to high-level guidance provided by DISA. In an effort to meet this guidance, LANL IT has implemented Joval Continuous Monitoring, a cross-platform solution utilizing Security Content Automation Protocol (SCAP) content.
Joval is a host-based SCAP engine that evaluates information systems to discover misconfigurations and vulnerabilities using standardized content from NIST and DISA. Utilizing Joval, LANL is able to satisfy Federal Information Security Management Act (FISMA) requirements and maintain LANL standards of security.
This research discusses the successes and challenges of implementing Joval to help improve the security posture of RHEL systems at LANL.
Custom applications that are necessary for operating the business of a national laboratory are traditionally slow to create and deliver. Idaho National Laboratory (INL) has designed an Agile DevOps process that rapidly delivers applications from inception to production within 4 hours, including related infrastructure. Leveraging Microsoft's Azure DevOps Server, formerly known as Team Foundation Server, makes Rapid Application Delivery (RAD) and automation possible. RAD provides consistent, reliable, repeatable, traceable, and immutable deployments. When the process is followed, virtual machines are easily replaceable, with all applications re-installed for production. Beyond Microsoft-based application development, RAD has been applied to Java applications, PL/SQL, VM provisioning, patching, and more. This presentation will describe INL's current success using Azure DevOps Server and the future plans for leveraging the technology.
The Linux kernel auditing system provides powerful capabilities for monitoring system activity. While the auditing system is well documented, the documentation and much of the published writing on it fail to provide guidance on what types of attacker-related activities are, and are not, likely to be logged. This talk will show the results of simulated attacks and analysis of logged artifacts for the Linux kernel auditing system in its default state and when configured using the Controlled Access Protection Profile (CAPP) and the Defense Information Systems Agency's (DISA) Security Technical Implementation Guide (STIG) auditing rules. This analysis provides a clearer understanding of the capabilities and limitations of the Linux audit system in detecting various types of attacker activity and helps guide defenders (system administrators, incident responders, hunt teams, and those determining log policy) on how best to utilize the Linux auditing system.
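To suggest what such artifacts look like in practice, here is an illustrative Python sketch (not from the talk) that extracts the commands recorded by execve audit rules from the default audit log; the path shown is the common default, and field layout can vary with auditd configuration.

```python
# Illustrative sketch: scanning audit records for execve events, the kind of
# artifact the analysis above evaluates. Field layout can vary by auditd setup.
import re

EXECVE = re.compile(r"type=EXECVE .*?argc=\d+ (?P<args>.+)")

def extract_commands(audit_log: str = "/var/log/audit/audit.log"):
    """Yield the argv strings recorded by execve audit rules."""
    with open(audit_log) as f:
        for line in f:
            m = EXECVE.search(line)
            if m:
                # a0="bash" a1="-c" ... -> collect the quoted argument values
                yield " ".join(re.findall(r'a\d+="([^"]*)"', m.group("args")))

if __name__ == "__main__":
    for cmd in extract_commands():   # requires read access to the audit log
        print(cmd)
```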
Increased focus on research productivity and reliable publication metrics at Idaho National Laboratory highlighted a need to upgrade the lab's publication processing system. The previous system minimally provided compliance but had no ability to capture and report on research productivity. A new system would combine efficient review mechanisms with robust capture and reporting of metrics.
Initially, INL chose to implement this solution in SharePoint. As the project progressed and requirements increased, INL learned that SharePoint was not the best candidate platform: the requirements were growing too complex and hit the ceiling of SharePoint's capabilities. Ultimately, the decision was made to shift to a more suitable development platform, and MVC5 with C# was used. Lessons learned in this project have improved our ability to identify projects that may not fit the SharePoint environment prior to development. Additional modules are currently being developed at INL for future release.
LRS and SORT use a SQL database, which allows for robust reporting of metrics, both within the applications and with PowerBI. This new depth of metrics reporting allows for complex and meaningful analysis of publication trends useful for making laboratory management decisions.
Do you struggle with how best to communicate critical information to your customers? Do you strive to become a strategic and valuable partner to your customers? Are you working to build business IQ with your colleague departments and establish trust? If so, you've found the right group to collaboratively develop solid practices for your organizations. We are currently focusing on the areas of Business IQ, Strategic Partnerships, Powerful Communications, and Provider Domain.
This presentation will summarize the work this group has done over the last four years to gather experiences across the laboratories, consolidate information to determine areas of focus, and structure a BRM framework for distilling and identifying best practices for our organizations. The approach for determining these practices has been based on leveraging proven approaches, including those of the BRM Institute, ISO 20K, and COBIT.
Who are we: We are a combination of 15 DOE labs that have been meeting together monthly online to discuss common issues regarding the delivery of IT services and identify successful approaches for improvement. Come and join the community and discussion. Renew relationships from last year and create new ones in order to share ideas and solutions to common challenges. Also, please join us for our NLIT 2019 Workshop on Friday afternoon, May 31st.
As Federal agencies and regulated industry migrate to cloud technologies, they still want visibility and control of their sensitive workloads, which need to comply with business, legal, and regulatory requirements. The National Institute of Standards and Technology (NIST) is collaborating with industry at the National Cybersecurity Center of Excellence (NCCoE) to design, engineer, and build solutions that demonstrate how a trusted compute pools architecture can provide assurance that cloud workloads running on trusted hardware support the protection of data and data flows between workloads in a private or hybrid cloud deployment model. In this session, NIST and its industry collaborators DellEMC, Gemalto, Hytrust, IBM, Intel, RSA, and VMware will share the reference architecture and implementation, built using commercial off-the-shelf technology, to support a lift-and-shift use case scenario in a hybrid cloud while delivering the set of security outcomes documented in the draft NIST Special Publication (SP) 1800-19.
Are you involved in Cybersecurity and Phishing Awareness? Come meet and network with your counterparts from across the DOE/NNSA complex; share successes, strategies, tactics, techniques and tips to make your awareness program the best it can be in an open discussion with your peers. Tomm Larson (INL), Becky Rutherford (LANL), and Brenda Ianiro (LLNL) will share stories about what their sites are doing to reduce risks from phishing and other user focused attacks, best practices and lessons learned. Learn more about the new DOE Enterprise Phishing and Cyber Awareness Collaboration Group "Phish Bowl" - a user driven group promoting collaboration on phishing and cyber awareness programs across DOE and NNSA labs and sites. This group offers a great opportunity to connect with your peers via monthly calls and other channels to help you grow your site's phishing and cyber awareness programs.
The goal of this session is to have an open discussion about the endpoint and the agents that are run on these systems. A brief presentation will run through the LANL agent landscape and immediately transition into an open discussion on various topics. These topics include, but are not limited to, agent management, performance, and configuration of WLS, Systrack, SEP, SCCM, Windows 10, and any other agents your organization might have to address any issues on the endpoint. LANL is also in the process of evaluating Carbon Black Defense, Cylance, Menlo, and Fireglass to further protect our assets on the edge. Do you use any of these products? Do you have anything to brag about? Are you on the cutting edge? Please come and share your experiences, positive and negative, about managing your endpoints. LA-UR-19-21611
Captain D. Michael Abrashoff
Classified Data Spills - also known as contaminations or classified message incidents (CMI) - occur when classified data is introduced to an unclassified computer system or to a system accredited at a lower classification than the data. Cleaning up a data spill can be challenging due to data remanence: residual data that continues to exist even after 'deleting' a file. For over 10 years, BCWipe has been the de facto standard tool for the U.S. DoD and DOE to erase selected files and data remanence beyond forensic recovery.
A new challenge has emerged with the prevalence of solid state storage, such as SSD - which handles data storage and memory differently than traditional hard disks. In addition to reviewing the issue, the aims of this presentation are to facilitate open discussion about alternatives and to educate about advanced tools, such as innovations in BCWipe, to help solve data spill cleanup on SSD in an effective and resourceful manner.
Sometimes an IT support group can feel like they are living in the Wild West. Changes to the operations of IT systems and networks happen without notice and often lead to instability and downtime for users. Lawrence Livermore National Lab's Weapons Complex Integration (WCI) IT team leveraged Change Management in ServiceNow to circle the wagons, corral the horses, and herd the cats. Maintenance windows were defined, standard changes were created, and rules put in place to avoid changes on a Friday afternoon when all the Sheriffs had gone to bed. Dashboards were created and a Change Advisory Board (CAB) established to track changes and link related incidents.
This presentation will describe the Cowboy environment WCI IT was operating under and how the use of Change Management in ServiceNow has improved the reliability of daily operations and provided improved stability for users. Attendees will learn how WCI IT has leveraged ServiceNow to bring order to the Wild West, along with the basic tenets of Change Management, the value of Change, and why they should integrate it into their environments even if they are using tools other than ServiceNow.
Laurence Nichols III
Over the past year, ORNL evaluated, selected, and implemented Cylance Protect and Optics as our endpoint protection solution. This presentation will cover the evaluation, selection, and implementation of those products. I will take you through the process used to determine requirements, identify and interview vendors, narrow the selection, and perform a POC. Then I will give an overview of how we went from POC to a completed implementation across over 12,000 MacOS and Windows endpoints. Finally, I will go through how that implementation has impacted our security posture, improved our communication, adjusted our way of thinking, and provided some unexpected benefits in the form of a quasi-SQA service.
Beyond the information garnered during the selection process, I believe the planning and evaluation process and the lessons learned throughout the implementation cross multiple disciplines, from cyber security and client management to project planning. The implementation's additional benefit of providing a security-related SQA process for open-source software was unexpected but has been one of the more pronounced impacts.
The intended audience for this presentation is anyone interested in updating or changing their endpoint security product, as well as anyone looking to perform a product evaluation and wide-reaching implementation.
IT has a wealth of data and information that is captured and managed on behalf of our customers and users. In many cases, data is fragmented across different systems without an easy way to use it. How is that information helping us make decisions? Better yet, how could it be used for data-driven business decisions? How many of your business customers write their own Access or Excel reports connecting directly to the data (and still may not be using it to its fullest potential)? Have you ever said, "There must be a better way!"? Well... we hear you, and couldn't agree more.
At PNNL, we are leveraging Microsoft's Power BI to combine different data sources and provide enhanced visibility and information use for our customers. Come see how quick and easy it is to bring life to existing data sources while providing a huge amount of value to customers.
Taking over Maintenance and Operations (M&O) for a development team often consists of being Tier 3 Support, clearing stuck scripts, and bringing the developers pizza. But when the development team is high-performing, simply adding green chile to the pizza isn't enough to justify your paycheck. The team needs to step beyond the basics to transform from a "support team" to a "partner team." One way to do this involves reducing friction for the dev team and their customers; expanding the capabilities of the dev team without impacting developer performance; and serving as Bridge Builder Extraordinaire and Shameless Promoter for the team and their products. This presentation will describe how the Integrated Solutions Team has done all three of these things to successfully become a partner to one of the highest performing teams in the Complex. We can't promise you our way is THE way. We can't promise you our lessons learned will be turnkey. We can promise you it's been a learning experience and we're excited to share.
As software has become pervasive in our lives, software security exploits have become a recurring theme in news headlines. While static analysis, dynamic analysis, and penetration testing can catch security flaws, they require at least a minimum viable software product to be developed, which leads to increased cost to mitigate findings. In contrast, threat modeling helps to identify security vulnerabilities in software design and architecture early in the secure development lifecycle. While test-driven development has become more common in DevOps to help clarify and verify functional requirements in custom software systems, threat modeling is not as widely known or practiced. The Data and Software Security group at Sandia National Labs began an awareness campaign on threat modeling and is developing a process to help software projects design security into their systems. This talk describes the whys and hows of threat modeling, the process and tools we use to better engage software developers and stakeholders, and some challenges and lessons learned.
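To make the idea concrete, below is a minimal sketch of STRIDE-per-element enumeration, the kind of early design analysis threat modeling enables. The component names and trust-boundary flags are invented for illustration and do not represent Sandia's actual process or tooling.

```python
# Minimal STRIDE enumeration sketch (hypothetical components, not Sandia's tooling).
STRIDE = ["Spoofing", "Tampering", "Repudiation",
          "Information disclosure", "Denial of service", "Elevation of privilege"]

# Each element is (component, crosses_trust_boundary) -- invented examples.
components = [("web frontend", True), ("auth service", True), ("report queue", False)]

for name, crosses_boundary in components:
    for threat in STRIDE:
        # Elements that cross a trust boundary get flagged for deeper review.
        priority = "review" if crosses_boundary else "note"
        print(f"[{priority}] {name}: consider {threat}")
```

Even this toy walkthrough shows why the technique works before any code exists: the design artifacts alone are enough input.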
SNL is managed and operated by NTESS under DOE NNSA contract DE-NA0003525.
Security is a topic that is never far from the mind of a good SharePoint administrator. Whether planning a new system or maintaining the integrity of an existing implementation, good security management should always be at the forefront. Lack of proper security settings can cause major issues such as unauthorized access; it can also cause wasted time and effort, which in turn can be very costly. In this presentation, we will discuss the flexibility and power of the out-of-the-box SharePoint security model and provide some top tips for implementing and maintaining best practice to prevent such issues.
The consequences of critical software security vulnerabilities are especially high for government agencies and national laboratories but can be remedied with secure software development practices. While no single activity will ensure the security of the software development lifecycle (SDLC), education and training are key to meaningfully changing existing security culture and to the development and maintenance of secure software.
Cohesive, enterprise-wide training on how to develop secure software is identified as an industry best practice; however, according to a 2017 Veracode study, 68% of software developers say their organizations do not provide adequate training. Sandia recognized the immediate opportunity and reward of providing training and developed a plan to equip and empower software development teams to advance the state of software security practices and culture through awareness and technical training, knowledge sharing, and access to resources. The training establishes the basis for a security and adversarial mindset and supports the idea that security is everyone's responsibility. SNL is managed and operated by NTESS under DOE NNSA contract DE-NA0003525.
Data Center Infrastructure Management (DCIM) is the merging of IT and building facilities into a holistic view so that power, space, and cooling can be employed as efficiently as possible. This presentation will cover the products used to manage and monitor 115 (and counting) network closets and 3 data centers. We will also show how PNNL used this data for:
• Immediate notifications when thresholds are exceeded
• Problem Determination by plotting trend data
• Timely Maintenance for failing components (especially UPS batteries)
• Realistic Capacity Planning
• Accurate data to determine true impact caused by equipment and AC changes to maximize energy efficiency
• Repository of data to satisfy DOE data calls and the basis for automated reporting
We will also touch on the benefit of establishing norms as the basis for Predictive Maintenance.
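As a toy illustration of thresholds and norms (not PNNL's actual DCIM products), the sketch below flags readings that exceed a hard limit or drift well outside a rolling baseline; the sensor stream and limits are invented.

```python
from collections import deque
from statistics import mean, stdev

# Hypothetical UPS battery temperature stream (deg C); limits are invented.
HARD_LIMIT = 40.0
window = deque(maxlen=24)          # rolling baseline of recent readings

def check(reading: float) -> None:
    if reading > HARD_LIMIT:
        print(f"ALERT: {reading} exceeds hard threshold {HARD_LIMIT}")
    if len(window) == window.maxlen:
        baseline, spread = mean(window), stdev(window)
        # A reading far outside the established norm hints at a failing component.
        if abs(reading - baseline) > 3 * spread:
            print(f"WARN: {reading} deviates from norm {baseline:.1f}±{spread:.1f}")
    window.append(reading)

for r in [31.0, 31.5, 30.8] * 8 + [38.9, 41.2]:
    check(r)
```

The last two readings trip the norm check and the hard threshold respectively, which is the "predictive" step: the drift warning fires before the outright limit is crossed.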
Cross-site request forgery (CSRF) is a cybersecurity attack in which content from a subverted web site forges requests to a different web site from within the user's browser. The victim site thinks the user made the request, allowing the attacker to perform privileged actions, such as changing account credentials or transferring money, without the user's knowledge or consent. CSRF no longer appears in the most recent OWASP Top Ten vulnerabilities for web applications, so software engineers and quality assurance specialists may be less familiar with the attack and its mitigations. This presentation will discuss CSRF and its implications in modern microservices architectures, where cross-site requests are often desired, and will provide an overview of CSRF and mitigation approaches suitable for a broad, technical audience. Sandia National Laboratories is managed and operated by NTESS under DOE NNSA contract DE-NA0003525. SAND2019-2100 A.
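For readers new to the classic mitigation, here is a minimal synchronizer-token sketch in Python/Flask; real applications should prefer their framework's built-in CSRF protection over hand-rolled checks like this one.

```python
# Minimal synchronizer-token CSRF sketch (illustrative only).
import secrets
from flask import Flask, session, request, abort

app = Flask(__name__)
app.secret_key = "change-me"  # demo value only

@app.get("/transfer")
def form():
    # Issue a per-session token and embed it in the form.
    token = session.setdefault("csrf_token", secrets.token_urlsafe(32))
    return (f'<form method="post">'
            f'<input type="hidden" name="csrf_token" value="{token}">'
            f'<input name="amount"><button>Send</button></form>')

@app.post("/transfer")
def transfer():
    # A forged cross-site request cannot read the token, so it fails this check.
    if request.form.get("csrf_token") != session.get("csrf_token"):
        abort(403)
    return "ok"
```

The key property is that the attacker's page can cause the browser to send cookies, but cannot read the token embedded in the victim site's form.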
Moving to Office 365 is hard. Moving to Office 365 without making everyone miserable is harder. In the Summer of 2018, BNL used data & automation to migrate about 3000 users on varied platforms from on-premises Exchange to Exchange Online, pulling the bulk of the organization into the cloud. Instead of deploying in a few waves of hundreds or thousands of users, BNL (ab)used a continuous delivery system to provide small batch, locally sourced, artisanal Exchange migrations. This presentation will explain the factors that led to a continuous deployment approach, outline the specific technology used, and discuss how that approach helped us meet our schedule while striking the right balance between disrupting the IT staff and disrupting end users.
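BNL's tooling is not described in detail above; as a rough illustration of the small-batch pattern, the sketch below drains a migration backlog a few users at a time, with hypothetical `start_move` and `is_complete` stand-ins for whatever the real mailbox-move calls were.

```python
import time

def start_move(user: str) -> None:
    # Hypothetical stand-in for kicking off a mailbox move request.
    print(f"starting migration for {user}")

def is_complete(user: str) -> bool:
    # Hypothetical stand-in for polling move-request status.
    return True

def migrate(backlog: list[str], batch_size: int = 25) -> None:
    """Continuously migrate small batches instead of a few giant waves."""
    while backlog:
        batch, backlog = backlog[:batch_size], backlog[batch_size:]
        for user in batch:
            start_move(user)
        while not all(is_complete(u) for u in batch):
            time.sleep(60)   # poll until the whole batch lands
        print(f"batch of {len(batch)} done; {len(backlog)} remaining")

migrate([f"user{i}@example.gov" for i in range(100)])
```

The design point is the loop itself: small, continuous batches localize failures and let the schedule flex without re-planning giant waves.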
Virtual desktop technology has been around for several years now. Laboratory IT professionals have implemented these technologies in various iterations, from kiosk models to training center models and others. Some have even attempted to replace production user desktops with it. With each NLIT come different stories of success and challenges. This talk will take a close look at the building blocks of VDI, industry models, considerations, and assumptions. We will look at what good success criteria are, how to "group" a user base, how IOPS play into a configuration, and how to deal with custom applications and user profiles. Also discussed are application management and OS upgrades and patching. We will examine the credibility factor: "will the design I come up with make me look good or bad with my user base?" Lastly, I will show how I took a group of users at LLNL, replaced their aging production desktops with virtual desktops, and handed them a faster computer than any new computer off the shelf. All for a very reasonable price!
LANL has approximately thirty thousand phone numbers in use and is required to migrate these Time Division Multiplexing (TDM) based services to next-generation Internet Protocol based voice services. The initiative covers services such as desk phones, hall/lobby phones, fire protection panels, fax and modem lines, Secure Telephone Equipment (STE) phones, and audio teleconferencing. LANL is increasing reliability through geographic system redundancy that mirrors how a cloud-based deployment might look, but on campus.
This presentation will describe the evolution of these efforts and the obstacles encountered in achieving a full replacement for TDM voice switching, present the security expectations for LANL's next-generation voice services, and discuss the technologies used to achieve geographic redundancy, high availability, and versatility.
Sandia Insights is a living analytics architecture and framework, primarily for the data scientist community, that provides access to analytics data, models, and visualizations; team collaboration tools for managing analytic studies; and a deep set of enterprise applications. It is the overarching design for how we want to implement Data Sciences at Sandia.
Rather than just focusing on tools (applications), we also focus on data engineering and the data pipeline: not just the data tools and products we pick, but the data paradigm we use. It's also the exploratory and collaborative environment through which data scientists can leverage distributed storage and computing (HDFS, distributed databases, Jupyter, multiple programming languages, cloud, etc.). It includes the "productionization" of machine learning, where we make analyses and results available via APIs and programmatic means. And lastly, it's the communication layer providing intelligent search against a robust analytics catalog, as well as compelling reports and visualizations. This new environment will streamline the delivery of analytics in support of problem recognition and data-driven decision making.
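"Productionization" here means putting a model behind a programmatic interface. A minimal sketch follows, assuming Flask and a stand-in model object; the abstract does not name Sandia Insights' actual serving stack, so everything here is illustrative.

```python
# Minimal model-serving sketch (assumes a scikit-learn-style `model`;
# Sandia Insights' actual stack is not specified in the abstract).
from flask import Flask, request, jsonify

app = Flask(__name__)

class DummyModel:
    def predict(self, rows):
        # Stand-in for a real trained model loaded from an analytics catalog.
        return [sum(r) for r in rows]

model = DummyModel()

@app.post("/predict")
def predict():
    rows = request.get_json()["rows"]          # e.g. {"rows": [[1, 2], [3, 4]]}
    return jsonify(predictions=model.predict(rows))

if __name__ == "__main__":
    app.run(port=8080)
```

Once a model sits behind an endpoint like this, any downstream report, dashboard, or batch job can consume it programmatically, which is the point of the "communication layer."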
Given the complexities of industrial control systems - where safety and reliability are of the utmost priority - and a threat landscape that is ever changing, approaching cybersecurity through a single lens can leave an organization exposed to unacceptable risks. This session explores some of the different methods for collecting information, which are vital to identify, mitigate, and remediate vulnerabilities; system misconfigurations; risky or bad changes to systems; and indications of unexpected or undesired behavior. We'll take an in-depth look at various ways of collecting critical information that can be leveraged to prevent, detect, and respond to industrial cybersecurity events - whether from human error, equipment failure, or malicious activity.
Attendees will learn:
• The key benefits of each collection method.
• The gaps or pitfalls present for the various methods of data collection.
• A risk-based approach to determining where to start and a path to take.
The Kansas City National Security Campus is growing, and the New Employee Orientation (NEO) process needed to change to accommodate bigger, less frequent classes and reduce the burden on presenters and organizing HR staff. A Smart Card credential is required for logical and physical access at the KCNSC, and encoding one badge with face-to-face enrollment can take 15 minutes. We developed a new process that provides printed and encoded badges for up to 50 new employees in less than an hour on the day each employee is hired. The badges are ready for physical and logical access immediately after issuance and activation. The new process saves an estimated 2.5 FTEs otherwise spent by new employees just waiting in line for their badges on the first day. This presentation reviews the inefficiencies of face-to-face enrollment and outlines the steps and processes created to batch-encode and print smart cards the week before NEO. The new process integrates data from GSA/USAccess via OneID, local certification authorities, and other software for encoding the cards and queuing the batched smart cards. This allows us to provide LOA4 credentials for people without a PIV credential.
Cybersecurity of high-power electric vehicle (EV) charging infrastructure is critical to safety, reliability, and consumer confidence in this publicly accessible infrastructure. Cybersecurity vulnerabilities in high-power EV charging infrastructure could negatively impact the electrified transportation sector, leading to reduced adoption and utilization. This presentation details research efforts focused on understanding high-consequence events arising from cybersecurity threats to high-power charging infrastructure. Methodologies are detailed for categorizing, impact-ranking, and prioritizing high-consequence events. Additionally, mitigation strategies and solutions are proposed and discussed as preliminary means to increase the robustness and reliability of EV charging infrastructure.
DOE sites, like many other organizations, consider and procure cloud services to meet their software and computing needs. DOE sites procure cloud services in part because they anticipate improvements in efficiency, scalability, and cost effectiveness. Most sites understand how to evaluate cloud providers on performance metrics such as service levels, scalability, and security. However, DOE sites do not always have a complete understanding of how cloud migration impacts their organization's sustainability performance, or of what environmental benefits they can claim from their cloud migrations. This session will provide an overview of critiques of the sustainability of cloud computing and explain why the environmental impacts of cloud services matter for DOE sites, including federal trends and requirements to procure cloud services and the importance of appropriately and accurately claiming sustainability benefits.
Industry analysts, including Gartner and Forrester, predict cloud revenues will increase by approximately 20% in 2019, driven by greater adoption in industries such as financial services. For organisations with a different threat profile - like AWE and our US national laboratory partners - the need to make informed decisions about the extent of cloud adoption goes beyond a financial business case.
The prevalence and impact of cloud-based attacks is increasing. Over 50% of incidents seen by MWR's incident response team in the last year involved the cloud. It is therefore critical to understand the risks and potential mitigating activities and controls. Much of the risk can be mitigated through existing best practice, but this can be combined with extensive automation to improve response times and scale security across large environments.
In this talk, AWE and MWR will share guidance on developing a strategy to cover Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and hybrid deployments within both AWS and Azure. MWR will also share their research on how to identify malicious activity in the cloud that can bypass traditional application and host based detection.
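As a small example of the cloud-native telemetry such detection relies on, the hedged boto3 sketch below pulls recent console-login events from AWS CloudTrail. The event choice and the alerting idea in the comments are ours for illustration, not MWR's methodology.

```python
# Sketch: pull recent ConsoleLogin events from AWS CloudTrail with boto3.
# The event choice and alerting heuristic are illustrative only.
import boto3
from datetime import datetime, timedelta, timezone

ct = boto3.client("cloudtrail")
resp = ct.lookup_events(
    LookupAttributes=[{"AttributeKey": "EventName",
                       "AttributeValue": "ConsoleLogin"}],
    StartTime=datetime.now(timezone.utc) - timedelta(hours=24),
)
for event in resp["Events"]:
    # Each record carries who logged in and when; repeated failures from one
    # principal are a classic signal worth alerting on.
    print(event["EventTime"], event.get("Username", "?"))
```

Signals like these live in the control plane, which is why they can catch activity that never touches a host-based agent.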
As augmented and virtual reality become more pervasive, the benefits of using this technology to augment, or even replace, traditional devices have become increasingly evident. Immersive technology is especially effective for use cases in training, collaboration, and data visualization, making it very useful and cost effective within Sandia's mission space. However, there are risks inherent in developing applications for these devices that fall primarily into three categories: security, human factors, and device limitations. This presentation will discuss lessons learned and mitigation strategies for building an organizational capability in augmented and virtual reality.
Sandia National Laboratories is a multimission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA0003525. SAND2019-2277 A.
How many times has someone asked you what IT does? Can you easily redirect these users to a Service Catalog that describes your organization's services and offerings? More importantly, is it up to date? While it may be difficult for some to understand what a Service Catalog supports, it's important for your organization to establish and mature service delivery and processes for both services and products.
One of the biggest challenges in redefining a Service Catalog is that one size does not fit all - there is no right or wrong way. Terminology, Service Level Agreements, and the meaning of services can differ greatly depending on your audience, which can range from business sponsors who pay for services to customers who consume them. Both are equally important. An important thing to remember is that a Service Catalog provides better management of IT as a whole and the foundation of IT Service Management.
So how do you begin? In this presentation, we will share our experience addressing both the process and technical aspects of developing a one-stop shop for all service requests; how we engaged our customers and simplified and standardized the fulfillment model by reusing repeatable processes; and how we plan to sustain this model and continue to enhance our Service Catalog to support the ever-changing nature of IT technologies and service delivery in general.
DevOps and automated security testing have been identified as strategic objectives in the SEIS (Science & Engineering Information Systems) organizational roadmaps for future capabilities at SNL (Sandia National Laboratories). These capabilities aim to improve application security, quality, deployment efficiency, and overall cost. SNL's current security testing service is manual, covering new application development as well as major enhancements to applications currently in O&M (Operations and Maintenance). Manual testing creates bottlenecks for development teams, is costly, and is time consuming. This presentation will discuss how we have created a capability to automate Burp Suite security scanning for discovering software vulnerabilities as part of the continuous integration/continuous deployment (CI/CD) DevOps pipeline.
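The general shape of such a pipeline stage can be sketched as a script that triggers a scan and fails the build on high-severity findings. The endpoint and response format below are hypothetical stand-ins, not Burp Suite's actual API.

```python
# CI gate sketch: trigger a DAST scan and fail the build on severe findings.
# The endpoint and JSON shape are hypothetical, not Burp Suite's real API.
import sys
import requests

SCAN_API = "https://scanner.example.gov/api/scans"   # hypothetical service

def run_scan(target_url: str) -> int:
    scan = requests.post(SCAN_API, json={"target": target_url}).json()
    report = requests.get(f"{SCAN_API}/{scan['id']}/report").json()
    return sum(1 for issue in report["issues"] if issue["severity"] == "high")

if __name__ == "__main__":
    high = run_scan(sys.argv[1])
    if high:
        print(f"{high} high-severity findings; failing the build")
        sys.exit(1)   # a non-zero exit breaks the CI/CD stage
```

The non-zero exit code is the whole integration trick: any CI system treats it as a failed stage, so insecure builds never promote.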
The market for Public Cloud services has been growing rapidly for years, and Public Cloud vendors have focused on growing their businesses by providing services that meet the requirements of existing and potential customers. This session will examine how Public Cloud services have evolved, break down Public Cloud terminology, and examine the unique challenges associated with adoption of Public Cloud. Key topics will include (1) Overview of Public Cloud Technology, (2) Demand for Innovation, (3) Public Cloud Projects vs Public Cloud Programs, and (4) Modern IT Workforce Transformation.
The LLNL IT group has been working with our business, security, and scientific communities to understand whether IoT technologies can improve process automation, enhance security capabilities, or provide additional analytical capabilities to scientific lab and SCADA systems. This presentation will detail some of the IoT-specific business use cases and the underlying cloud-based architecture used to develop an institutional IoT service. Live or pre-recorded demos of the implemented use cases on Amazon Web Services will also be provided.
How do you manage your backlog of things to do? Loudest gets done first? Latest shiny object excites enough people to get top priority? But where does that get you and your organization? Possibly stuck in a rut of giving systems attention without a thorough look at whether and how they can be integrated into other systems and priorities (you know, the take system off shelf, upgrade, put back on shelf approach)?
If this is all too familiar, come hear what the IT team at PNNL is doing.
We have been busy developing and implementing a new model for mapping out what's important to the institution and how it all integrates together. It starts by tearing down and reimagining the framework in which we manage and operate the Lab, then building it up again around ecosystems centered on answering the question, "How do researchers work?"
Come learn more about how PNNL is strategically planning our roadmaps and Integrated M&O investments to transform support of the mission of the Lab.
DevOps is the hot new buzzword in software development. Much like the term Agile software development, DevOps means something different to everyone. Our team at Sandia has elected to have a dedicated DevOps specialist focused on the continuous integration, testing, and delivery of our products. This emphasis has had a measurable positive impact on our legacy, brownfield and greenfield development. These efforts have resulted in automated code analysis, open source vulnerability scanning, the hands-free execution of thousands of unit and integration tests, as well as fully automated deployments for every pull request to a containerized independent sandbox instance of each related application. In this session we will cover how 8 developers integrate DevOps practices to manage 17 applications across 40 servers with our technology choices of Git, Docker, Azure DevOps Server, and other tools.
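The per-pull-request sandbox idea can be illustrated with plain Docker CLI calls: build an image tagged by PR number and run it as a disposable, isolated instance. Names and port assignments below are invented; the team's actual Azure DevOps Server configuration is not shown in the abstract.

```python
# Sketch: deploy an isolated sandbox container per pull request.
# Image/container names and port math are invented for illustration.
import subprocess

def deploy_pr_sandbox(repo_dir: str, pr_number: int) -> None:
    tag = f"myapp:pr-{pr_number}"
    name = f"myapp-pr-{pr_number}"
    subprocess.run(["docker", "build", "-t", tag, repo_dir], check=True)
    # Remove any previous sandbox for this PR, then start a fresh one.
    subprocess.run(["docker", "rm", "-f", name], check=False)
    subprocess.run(["docker", "run", "-d", "--name", name,
                    "-p", f"{9000 + pr_number}:8080", tag], check=True)
    print(f"PR {pr_number} sandbox at http://localhost:{9000 + pr_number}")

deploy_pr_sandbox(".", 42)
```

Because each sandbox is keyed to the PR, reviewers can exercise the change live, and tearing it down is one `docker rm` away.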
What will the global demands for nuclear security look like in 10-20 years and what digital strategies should NNSA pursue to be ready for it? There are a lot of digital technology trends that may or may not come to fruition in this time frame. Trends in AI, generative design, model based engineering, simulations, digital twins and others can help inform what our long term vision should strive for. For example, one trend that is definitely gaining traction is the digital twin. Major engineering systems throughout industry are moving towards this new approach to information-driven engineering and product lifecycle management. In this session I will explore some of those trends and weave them into one possible vision about how NNSA can take advantage of those trends to enable our future nuclear security needs.
Industrial control systems are integral to the operations of many of the nation's largest manufacturers as well as to water treatment facilities, oil and gas production, and power plants. It is becoming increasingly common for organizations to facilitate improved communication and efficiency across and between organizations or subsidiaries by connecting their control systems to the Internet. However, in so doing, they become vulnerable to cyberattacks that could result in significant danger or disruption to city or regional populations or to geographically dispersed production chains.
Considering the seriousness of the consequences, it is essential to develop an understanding of: 1) how to integrate safety and security requirements into control software; 2) how to design and develop tools capable of observing large data streams from the physical processes and detecting anomalous behavior; and 3) how to prevent and detect Advanced Persistent Threats (APTs) in ICS. A better understanding of these topics is vital to provide control technicians and engineers with tools to implement the security and safety functionality required to protect control systems against adversarial attacks.
Sandia National Labs End User Computing support is responsible for the delivery of a wide variety of services to a multi-site, diverse, and multifaceted workforce. This is done through partnership with a contract support team. We were posed with the question that faces many: how do we accurately measure the success of the services provided to the end user in a way that takes into account the entirety of the customer experience?
As IT constantly grows and the scope of its services transforms, measuring them against the same standards becomes increasingly difficult and often foolhardy. Sandia has taken a scorecard approach to monitoring the service delivered by our contract partners.
This presentation will go over our initial approach to metrics and how we transformed our contract measurement by implementing a balanced scorecard. We will discuss the implementation of metrics across the contract and their mapping to contract critical success factors: Safety & Security, Staffing, Service Delivery, and Quality. Service-specific scorecards rolling into contractual SLAs have transformed our contract relationship, identified areas for service improvement, and enabled us to manage end user computing support holistically.
User experience has become a differentiating factor for a number of leading technology companies, and IT@PNNL is making it a focus of its own custom application development. We've developed a design system and supporting library of React components to make it easier for developers to deliver a high-quality experience that users will immediately recognize and be successful with. UX Architect Geoff Elliott will provide a brief overview of design systems, walk through the history of design and the problems faced by internal software development at PNNL, and demonstrate the system of components and accompanying documentation in use today at PNNL. You'll also get a sneak peek at the challenges the system will be evolving to tackle next.
The Department of Energy's vision was to automate its IT Asset Management (ITAM) process, specifically the way outdated software and hardware were identified. The goal was to mitigate the risk of cyber-attacks by reducing outdated IT assets and the associated security vulnerabilities. Connecting IT assets to cybersecurity had never been accomplished on an enterprise level, and DOE's innovation was years ahead of both government and commercial peers. One of the biggest challenges is identifying outdated IT software and hardware assets within the enterprise in a timely manner. The difficulty lies in aggregating and normalizing IT asset raw data from different data sources to create a single version of accurate, relevant information. There was no easy way to quickly and accurately identify IT assets nearing end of life and thus presenting possible cybersecurity vulnerabilities that could result in greater risk and legal and financial liability.
Join this session to learn how Flexera solutions enabled the Department of Energy to use normalized IT asset data to assess the vulnerability of existing software and hardware; be proactive rather than reactive in decisions related to IT assets; make decisions based on facts; create transparency so stakeholders are aware of security priorities and risks; increase accountability across the agency; and improve communications with timely responses.
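Stripped to its essence, the normalize-then-flag problem looks like the sketch below: map messy vendor strings to a canonical product, then compare against an end-of-life table. The aliases and dates are invented toy data; a real catalog such as Flexera's is vendor-maintained and far richer.

```python
# Sketch: normalize raw asset names and flag products nearing end of life.
# Aliases and EOL dates are invented; real catalogs are vendor-maintained.
from datetime import date, timedelta

ALIASES = {"ms win server 2008r2": "Windows Server 2008 R2",
           "windows server 2008 r2 sp1": "Windows Server 2008 R2"}
EOL = {"Windows Server 2008 R2": date(2020, 1, 14)}

def normalize(raw: str) -> str:
    return ALIASES.get(" ".join(raw.lower().split()), raw)

def flag(raw_assets: list[str], horizon_days: int = 180) -> None:
    for raw in raw_assets:
        product = normalize(raw)
        eol = EOL.get(product)
        if eol and eol - date.today() < timedelta(days=horizon_days):
            print(f"{product}: EOL {eol} - remediate soon")

flag(["MS  Win Server 2008R2", "Windows Server 2008 R2 SP1"])
```

Note that both raw strings collapse to one canonical product; without that normalization step, the same vulnerable asset would be counted twice or missed entirely.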
In the Virtual Desktop Infrastructure (VDI) world, Microsoft Windows has been the dominant desktop operating system. But what about the Linux desktop and VDI? VMware offers Linux in its Horizon View solution (Blast), but no zero client is available. Our organization had been looking for years for a replacement for the Oracle Sunray product, which had served our users for many years until it was no longer supported. We needed a product that would shrink our server room footprint while still providing a great user experience. We are currently moving to Linux VDI running on our hyperconverged Nutanix cluster. Many cutting-edge zero and thin clients were tested, and our team evaluated the PCoIP, Cloud Access Software, and Blast protocols, hoping to find a high-performance, ultra-secure remoting and endpoint architecture. We wanted a solution where data stays secure and never leaves the virtual environment but that also supports NVIDIA GPUs for high-end graphical displays. We finally found a product, PCoIP Private Cloud Access Software (CAS), that allows us to provide VDI to our Linux customers.
Since 2011 the federal government has mandated Cloud First; as of 2018 this became Cloud Smart. I will discuss the nitty-gritty of working in an accelerated push to the cloud: the use of FedRAMP versus commercial cloud; some of the big hurdles that apply to commercial as well as public sector entities; and the reality of working within the frameworks provided by OMB MAX, NIST, and PMOs. Mandated controls include a Trusted Internet Connection (TIC), using PIV for authentication, endpoints running in FIPS mode, integration with identity management systems, and federated authentication.
Learn about SLAC's journey from ServiceNow "IT ticketing" to leveraging the Now Platform and the Nuvolo application to enable scalability and efficiency of the lab's mission and operations.
Use cases include:
• Manage Science User Facility operational work online and control inventory of parts and assets.
• Establish a portal for Site Facilities services, streamline the trade-shops work, automate customer chargeback, and administer a preventive and corrective maintenance program.
• Design, develop and deploy small applications serving custom work processes.
Idaho National Laboratory was dealing with several issues in its disaster recovery strategy. Among the issues INL needed to address were aging hardware, rising software costs, backups stored in a single location, and recovery times that could take several days depending on the size of the data. After meeting with a number of vendors and running several proofs of concept, we purchased Cohesity, a hyperconverged clustered backup appliance.
Cohesity allows INL to instantly recover virtual machines without having to copy data across the network. INL was also able to implement production and DR clusters at a lower cost than our previous hardware/software combination cost us for a single site. Compression and dedupe rates are near 100X, which is 4 times better than our old system, and we were able to eliminate single points of failure by installing clusters at two sites. INL can now recover its data even after a complete loss at one of our data centers. With a single purchase, INL solved most of our DR hurdles, lowered our costs, and upgraded to the latest cutting-edge technology.
As social engineering remains a top threat vector, end-user security must be included in a defense-in-depth cybersecurity solution to protect all aspects of an organization from attack, especially its personnel. Fermilab has taken a proactive approach to this in the last year by upgrading and completely redesigning its cybersecurity awareness platform. A brand-new cybersecurity awareness website was designed, containing blog articles, printable handouts, video lessons, and a current list of phishing emails seen at the lab. Additionally, new branding was designed. Fermilab has revamped its Security Awareness Day and Training programs and introduced a "Cyber Sleuths" program. To evaluate the effectiveness of this program, phishing analysis and reporting is being used, transitioning from manual processing to automated response and integration into Fermilab's Cybersecurity Infrastructure. This talk will discuss the revamp of Fermilab's cybersecurity awareness program and its impact on the Fermilab community.
Deep learning is a powerful tool for image analytics but applying deep learning models to streaming data sources can be difficult when the streams vary in size, they involve both text and linked images, and different models need to be applied to each image. The Open Source Data Analytics (OSDA) team is using AWS to develop a deep learning image classification pipeline that addresses these challenges. The pipeline uses a reactive, message-driven, serverless architecture based on AWS Lambda and Amazon Simple Queue Service (SQS) to enable it to scale up for large data streams and down to conserve resources for smaller streams. The Dockerized models run on auto-scalable infrastructure to meet the varying demands of streaming data sources and Amazon DynamoDB was used to solve difficult serverless state management challenges. We will describe the design of the system, the challenges we encountered, and the lessons we learned while developing it.
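The message-driven shape described above can be sketched as a Lambda handler that consumes SQS records, classifies each linked image, and persists results to DynamoDB. The table name and the `classify` stub are invented for illustration, not OSDA's actual code.

```python
# Sketch of an SQS-triggered Lambda step in an image-classification pipeline.
# Table name and the classify() stub are invented for illustration.
import json
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("image-results")        # hypothetical table

def classify(image_url: str) -> str:
    return "unknown"                           # stand-in for a real model call

def handler(event, context):
    # With an SQS event source, Lambda delivers a batch of records per invoke.
    for record in event["Records"]:
        msg = json.loads(record["body"])
        label = classify(msg["image_url"])
        table.put_item(Item={"image_url": msg["image_url"], "label": label})
    return {"processed": len(event["Records"])}
```

Because SQS buffers the stream and Lambda scales the consumers, the same handler serves both a trickle and a flood, which is exactly the elasticity the abstract describes.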
How do you handle real or perceived emergencies that require an immediate and dedicated response from technical resources for a large, widespread user group, but are not large enough to be a Major IT Incident? Take a journey down the path to resolving these issues while maintaining consistency in IT operations and customer satisfaction. We will learn how to respond to emergencies that would normally disrupt IT business operations in a way that is beneficial, structured, documented, and repeatable, and that generates the quickest possible resolution time with as little disruption of services to operations as possible.
Software evaluation and testing plays a pivotal role at Los Alamos National Laboratory by ensuring that the products needed to provide mission-critical support are secure and functional throughout the organization. Our software testing team is responsible for providing functional and graphical-user-interface evaluations and testing for hundreds of products. Given the significant variety of our current testing infrastructure, as many questions have been posed as there are platforms we support. How do other labs perform their software testing? Do other laboratories use software tools or programs to automate their testing infrastructure? I would like to invite others to an open discussion regarding improvements and changes within the software testing community. In addition, this presentation will provide a strong understanding of how our team delivers efficient and accurate software testing results, what steps we have taken to improve our current testing procedures, and how we intend to advance our software testing team in the future. To conclude, I want to encourage discussion on how we as software testers can use tools such as Eggplant, Selenium, Vagrant, and Packer to develop and implement a completely automated testing environment that enables more accurate and detailed testing results. This presentation aims to give all software testing teams an enhanced ability to aid in the mission of the national laboratories while encouraging communication between organizations to develop a more dynamic and technologically advanced testing environment.
ServiceNow (SN) proclaims, "With the Now Platform App Engine as your digital foundation, you can build solutions that work the way you work." That's exactly what ORNL is experiencing. SN at ORNL has evolved from a Service Management System for IT (change/incident/problem management) into a multi-faceted platform that benefits all organizations across ORNL. With this presentation, I will show you ways in which SN answers the call to provide tools for ORNL to better work the way ORNL works. I will also show you the limitations and licensing issues around developing SN applications for organizations outside of IT.
Are you a member of the Information Technology (IT) community seeking advancement in your career? Whether you work on a service desk, perform computer systems administration, write cyber security policy, code software applications, or perform another IT role, you may be wondering how to get to the next step in your career. IT managers from various national laboratories will share their insight on career growth including discussion of career pathways and promotions, the differences between IT careers at labs versus the corporate world, and to what extent college degrees or certifications do or don't present barriers. Bring your questions to what's sure to be an insightful and engaging discussion about navigating your IT career.
When looking at the most revered companies and products on the market, what really sets them apart boils down to user experience: carefully crafted products meeting the needs of users, elegantly. However, all too often hundreds of thousands, if not millions, are spent developing products without truly understanding the needs of the users. It should come as no surprise, then, that products are built that don't quite meet those needs; to make matters worse, by the time this is discovered, the project is already way over budget. Sound familiar?
Learn some methods to mitigate risks early on while helping ensure you're meeting the needs of your users and making their jobs easier; after all, we want your users to actually enjoy using the products your team develops. We'll cover topics ranging from capturing requirements to rapid prototyping while keeping users and stakeholders confidently and closely engaged along the journey.
Collaboration on sensitive data in a distributed environment, while problematic, is possible in many ways.
What does your organization use? How do you manage it? What has been your users' experience?
This Community of Interest session is for sharing the solutions that have been implemented and the lessons that have been learned during the process.
Dr. Randii Wessen
In this panel discussion, student members and recent graduates of the MS-ISA Mentoring Group will discuss their initial misconceptions of IT, what they could have done ahead of time to better prepare for the transition, their top lessons learned from the process, and their advice to business students interested in making the same transition.
As part of our agile software development and delivery lifecycle, we have implemented an automated process that creates "release notes" documentation. Our implementation is centered on GitLab's DevOps application and utilizes its built-in issue tracking, code repository, continuous delivery pipeline, application programming interface, and wiki modules. The engine of the process is a Node JS application executing inside a Docker container; it makes the API calls to fetch development sprint issue data, categorize and summarize that data, and finally post it as a wiki document. This process has allowed our software developers to spend more time on coding rather than documentation, given our project managers an easy way to verify delivered work against business requirements, and increased our project stakeholders' confidence in the quality of the product. We feel this process can benefit other information technology organizations within the national lab community; we can present a demonstration of the process as well as discuss ways it could fit into different software lifecycles and other DevOps applications.
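The team's engine is a Node JS application; purely as an illustration of the same flow, here is a short Python sketch using the python-gitlab client (our assumption, with invented server, project, and milestone names) to pull a sprint's closed issues and publish them as a wiki page.

```python
# Sketch of the release-notes flow in Python via python-gitlab
# (the team's real engine is Node JS; names here are illustrative).
import gitlab

gl = gitlab.Gitlab("https://gitlab.example.gov", private_token="TOKEN")
project = gl.projects.get("team/app")

issues = project.issues.list(milestone="Sprint 42", state="closed", all=True)
sections: dict[str, list[str]] = {}
for issue in issues:
    # Categorize by the first label, e.g. "bug" or "feature".
    category = (issue.labels or ["other"])[0]
    sections.setdefault(category, []).append(f"- {issue.title} (#{issue.iid})")

content = "\n".join(f"## {cat}\n" + "\n".join(lines)
                    for cat, lines in sorted(sections.items()))
project.wikis.create({"title": "Release Notes Sprint 42", "content": content})
```

Run inside a pipeline job at sprint close, a script like this turns the issue tracker itself into the single source of truth for what shipped.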
We are living in a world of increasing automation. The field of Project Management is no exception. The last time many PMs performed the actual calculations for EVM or determined the critical path of a project was when they were studying for the Project Management Professional (PMP) certification. Why is that? Simple: because these activities are automated by applications and software. These are things that a PM can no longer hang their hat on to be considered effective. As we move into the future, IT PMs need to be able to do things differently. Their skillsets need to focus as much (or possibly more) on the ability to communicate, collaborate, build relationships, and drive value delivery. The ability to do these things will allow IT PMs to better understand business needs, elicit and analyze requirements, and then work with technical teams to develop solutions and thus be successful. There will be a fundamental shift in the PMI Talent Triangle, with a decreased emphasis on Technical Project Management Skills and an increased emphasis on Leadership along with Strategic & Business Management.
PNNL's Migration to the Cloud
As you start your journey to migrate your internal applications, there are lots of pieces to consider. This presentation will go through things to consider and how PNNL used the Azure Scaffold documents to design our layout. We created a project, "Cloud Ready@PNNL," to build the foundation for this effort. We are taking a modernize-then-migrate approach from an application perspective, so hybrid cloud is a big piece of the story.
I will go through several of the following topics:
• General things to consider for a migration
• Subscriptions / Resource Groups, our design and why we decided to lay it out this way
• Our approach to internet traffic design and overall traffic routing
• Naming Strategy / AD Group Strategy, break down some of the thoughts around our approach to naming and the tie in to our resource group strategy
• Monitoring / Security, talk about where we are in our security, guardrail and monitoring strategy
To wrap it up I will cover some of the gotchas that we ran into along the way.
This talk will discuss the setup of a Secure DevOps pipeline for an internal project at Sandia National Laboratories using open source tools. DevOps attempts to speed up how quickly new versions of software can be released to end users. That software must be secure, so it is critical to integrate security tools into the DevOps pipeline. Such tools include static and dynamic application security testing tools along with custom-built, automated security tests. Open source security and testing tools such as FindBugs, SpotBugs, OWASP Zed Attack Proxy (ZAP), Selenium, TestNG, and Cucumber were used and integrated into a Jenkins DevOps pipeline wherever possible. Besides benchmarking security automation tools in a realistic environment, other initiatives associated with this work included documenting and sharing the lessons learned and challenges encountered, and identifying future work. These findings, and how they led to a Secure DevOps pipeline that improved the security posture of one mission-support software system at Sandia, will be discussed. SNL is managed and operated by NTESS under DOE NNSA contract DE-NA0003525.
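As one concrete piece of such a pipeline, ZAP exposes a local API that a stage can drive. Below is a simplified sketch using the zapv2 Python client; it is our own illustration, assuming a ZAP daemon is already running locally, and does not reproduce the talk's actual Jenkins integration.

```python
# Minimal ZAP-in-a-pipeline sketch using the Python owasp-zap client.
# Assumes a ZAP daemon is already running on 127.0.0.1:8090.
import time
from zapv2 import ZAPv2

target = "http://app.example.gov"   # hypothetical internal app
zap = ZAPv2(apikey="changeme",
            proxies={"http": "http://127.0.0.1:8090",
                     "https": "http://127.0.0.1:8090"})

scan_id = zap.spider.scan(target)                 # crawl the app first
while int(zap.spider.status(scan_id)) < 100:
    time.sleep(2)

scan_id = zap.ascan.scan(target)                  # then actively scan it
while int(zap.ascan.status(scan_id)) < 100:
    time.sleep(5)

high = [a for a in zap.core.alerts(baseurl=target) if a["risk"] == "High"]
print(f"{len(high)} high-risk alerts")            # a gate could fail the build here
```

Wrapped in a pipeline stage that exits non-zero when `high` is non-empty, this becomes an automated security gate rather than a manual test.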
This technical talk will focus on Fermilab's migration of Office 365 from the ADFS identity provider to Ping Federate. Highlights will include the reasons for initiating the migration, identifying the proper prerequisites to perform it, and all the necessary technical work involved with federated domain management, establishing an Office 365 connection with Ping Federate, proper testing, and lessons learned.
In our collective run to the cloud we have all taken some hard knocks, hit some big wins, and muddled our way through. Let's sit down together to share lessons learned and discuss what is working and the next steps in maturing our cloud programs. We will have a facilitated discussion covering the following cloud topics, and others as they arise.
1. Transitioning skill sets and mind sets to cloud solutions
2. Managing cloud procurement and costing (chargebacks, resource tagging, etc.)
3. Risk assessment, auditing, and cloud authorization
4. Incident response and continuous monitoring
5. Change management
Argonne has silos of IT support, and until recently each organization managed its own IT asset lifecycle, which made support and license tracking a nightmare. Over the last year, the Business and Information Systems organization has led an initiative to centralize the IT asset lifecycle across Argonne.
Attend this session and learn how we engaged stakeholders, created a working group with representation from different organizational silos, created new processes, reduced tools, and are currently rolling out a site-wide solution.
Argonne's journey from an in-house commodity IT support model to a vendor led, managed service provider approach, has been a long and winding road. As we approach the first renewal period, we would like to share some lessons learned along the way. We will cover how we got here, how the vendor experience has changed the way our customers interact with our teams, what is better, what is worse, and what we want to change going forward. We will also touch on how we have leveraged our partnerships with other labs to forge a new approach to contract management around our larger vendor engagements.
CodeVision is a malware analysis platform developed at Los Alamos National Laboratory that helps speed up cybersecurity incident response without sending samples to externally hosted products for analysis. Our goal is to make CodeVision cheaper to maintain, easier to scale, and easy to deploy. We decided to update the CodeVision tool with an open source malware analysis framework and create a containerized application. This application is meant to facilitate analysis of malware-related files, leveraging as much knowledge as possible in order to speed up and automate end-to-end analysis. We have built modules that provide near-identical functionality to the original CodeVision, along with additional modules, including one that checks whether files are listed in the National Software Reference Library and another that submits files to a FireEye sandbox.
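The NSRL module reduces to a known-file hash lookup: hash the sample and check membership in the reference set. A minimal sketch follows; the in-memory set is a toy stand-in for the real NSRL Reference Data Set.

```python
# Sketch of an NSRL known-file check: hash the sample, look it up.
# A real deployment loads the NSRL Reference Data Set, not a toy set.
import hashlib

KNOWN_GOOD_SHA1 = {"da39a3ee5e6b4b0d3255bfef95601890afd80709"}  # toy stand-in

def sha1_of(path: str) -> str:
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def is_known_good(path: str) -> bool:
    # Files in the NSRL set are known software, not worth deep analysis.
    return sha1_of(path) in KNOWN_GOOD_SHA1

open("/tmp/sample.bin", "wb").close()   # empty demo file; its SHA-1 matches
print(is_known_good("/tmp/sample.bin"))
```

Filtering known-good files early is what lets the rest of the pipeline spend its time, and its sandbox submissions, on the samples that actually matter.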
Challenged by an environment that includes numerous heterogeneous devices, vendors, and a perimeter that extends into the cloud? ORNL was, and has been developing an agile security approach to a sprawling infrastructure of ever-increasing complexity and scale.
This talk discusses how ORNL strengthened its SIEM capabilities through an architecture that is distributed by design for better scalability, offers improved performance to support future automation and mitigation work, and achieves higher relevance through data enrichment and correlation for actionable events and alerting. Building on an Elastic stack platform, ORNL has been able to provide better opportunities for collaboration between researchers and operations, visual analytics for leadership, and rich data sets that empower other IT groups and reduce the time to pinpoint trouble within the network.
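Enrichment in such a pipeline typically means attaching context to an event before indexing it. Here is a toy sketch with the elasticsearch Python client; the index name and asset-owner table are invented, not ORNL's schema.

```python
# Toy enrichment step before indexing into Elasticsearch.
# Index name and asset table are invented for illustration.
from datetime import datetime, timezone
from elasticsearch import Elasticsearch

ASSET_OWNERS = {"10.0.4.17": "facilities-scada", "10.0.9.2": "hpc-login"}

es = Elasticsearch("http://localhost:9200")

def enrich_and_index(event: dict) -> None:
    event["owner"] = ASSET_OWNERS.get(event["src_ip"], "unknown")
    event["@timestamp"] = datetime.now(timezone.utc).isoformat()
    # Correlated, enriched events make alerts actionable for analysts.
    es.index(index="enriched-netflow", document=event)

enrich_and_index({"src_ip": "10.0.4.17", "dst_port": 502, "action": "deny"})
```

An alert that already names the owning system is one an analyst can act on without a lookup, which is the "higher relevance" the architecture is after.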
Increasing quality while decreasing bug releases and costs is the goal of every software quality group. There are many things to consider with a quality group that exists in a diverse software development environment, one of them being automated testing.
There are many challenges to adding automated testing into a diverse development environment. Some of the traditional approaches do not provide the desired results. The addition of automation can increase testing coverage and decrease time required for testing. Automation can also give rapid feedback on issues found. Automated User Acceptance Testing is not stack specific and can be used across many technologies.
There is a need to have a dedicated quality team to take ownership and overcome the obstacles that would block these efforts otherwise. Building a framework that supports a variety of technologies and software development efforts is key to increasing overall quality and reducing costs of development.
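Stack-agnostic UAT automation drives the application the way a user would, regardless of what it is built on. A minimal Selenium sketch follows, with an invented URL and element IDs.

```python
# Minimal stack-agnostic UAT sketch with Selenium (URL and IDs invented).
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://app.example.gov/login")
    driver.find_element(By.ID, "username").send_keys("uat-user")
    driver.find_element(By.ID, "password").send_keys("secret")
    driver.find_element(By.ID, "submit").click()
    # The assertion is the acceptance criterion, independent of the app's stack.
    assert "Dashboard" in driver.title, "login flow failed"
    print("UAT login scenario passed")
finally:
    driver.quit()
```

Because the test only sees the rendered UI, the same framework covers Java, .NET, and Python applications alike, which is what makes it viable across a diverse development environment.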
Dr. Roger Hartley
Static Application Security Testing (SAST) tools examine the source code or binaries of software and attempt to find flaws that lead to exploitable security vulnerabilities. Sandia National Laboratories (SNL) has used multiple SAST tools and has recently been considering consolidation to a single tool. As a result, a study was performed to quantify the performance of these tools in terms of how well they find actual flaws, how many actual flaws they miss, and how many false positives they report. SNL's study found that SAST tools miss substantial numbers of actual issues and have substantial false positive rates. These findings agree with recent studies by NIST and CAS (NSA). All of these studies found little overlap between the flaws found by the different tools, suggesting the possible use of multiple tools in a hybrid configuration. This presentation will provide an overview of these studies, discuss how the different ways these tools work could explain the findings, and provide factors for consideration for organizations wishing to select a SAST tool.
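Once findings are normalized to comparable keys (for example, file and line), the overlap and false-positive arithmetic reduces to set operations, as in this toy sketch with invented findings.

```python
# Toy sketch of the overlap/false-positive arithmetic between two SAST tools.
# Findings are normalized to (file, line) keys; all values are invented.
tool_a = {("auth.c", 120), ("parse.c", 88), ("io.c", 14)}
tool_b = {("parse.c", 88), ("net.c", 230)}
confirmed = {("auth.c", 120), ("parse.c", 88), ("net.c", 230), ("crypto.c", 9)}

for name, found in (("A", tool_a), ("B", tool_b)):
    true_pos = found & confirmed
    print(f"tool {name}: {len(true_pos)} true, "
          f"{len(found - confirmed)} false positives, "
          f"{len(confirmed - found)} missed")

print(f"overlap between tools: {len(tool_a & tool_b)} finding(s)")
# Little overlap is why a hybrid, multi-tool configuration can be attractive.
```

In this toy data, each tool misses flaws the other catches, which mirrors the studies' argument for running more than one tool.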
Sandia National Laboratories is a multi-mission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA0003525.
Argonne has struggled several times to build a Configuration Management Database (CMDB) and failed. Over the last year we have made great progress building our CMDB and have reached a point where we are actively using it to manage changes effectively and assist with resolving major incidents. What changed? What did we do differently this time than before? Was it the people? The process? The tool? A combination of all three? Attend this session and find out!
Enterprise Asset Management (EAM) systems, along with other business systems, are necessary to run the business of a national laboratory. The Idaho National Laboratory has created a Git-centric DevOps system that transforms software development around business systems. This system, centered on Microsoft's Azure DevOps, enables push-button automated deployments of software changes across environments, event-driven and fault-tolerant processes, and the ability to spin up entirely new instances of an EAM. Software development has moved toward the Agile methodology to produce software in a more timely fashion that is more likely to match customer needs at completion. However, with enterprise systems the software is simply one link in a longer chain. DevOps aims to "Agilify" the other components of providing that enterprise system software service, including databases, networking, and infrastructure. This work demonstrates what the Idaho National Laboratory has done in moving ABB's Asset Suite to a DevOps environment. This presentation will explain how shifting both technical and business processes from manual to fully automatic is essential to achieving a responsive business environment, and will also explain the steps and process used to identify areas for improvement and then enact the changes necessary to achieve them.
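The push-button, cross-environment deployment idea can be sketched as a simple promotion loop with gates; the environment names and the deploy/approval stubs below are invented, not INL's actual Azure DevOps pipeline.

```python
# Sketch: promote a build artifact across environments with a prod gate.
# Environment names and the deploy/approve stubs are invented.
ENVIRONMENTS = ["dev", "test", "prod"]

def deploy(artifact: str, env: str) -> None:
    print(f"deploying {artifact} to {env}")     # stand-in for a real pipeline step

def approved(env: str) -> bool:
    return True                                  # stand-in for an approval gate

def promote(artifact: str) -> None:
    for env in ENVIRONMENTS:
        if env == "prod" and not approved(env):
            print("prod gate not approved; stopping")
            return
        deploy(artifact, env)

promote("asset-suite-config-v42")
```

The value of encoding the chain this way is that the same artifact moves through every environment unchanged, with human judgment reduced to a single explicit gate.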