
SEMANTIC TECHNOLOGY FOR
INTELLIGENCE, DEFENSE, AND SECURITY

STIDS 2015



Half-day Tutorials



Go to:   Tutorial B (A.M.)      Tutorial C (P.M.)      Tutorial D (P.M.)


  Tutorial A:   Mining RDF with SPARQL

Wednesday A.M., November 18, 2015
Faculty: Ian Emmons
Description: This tutorial uses an example-based teaching approach. Participants arriving with a basic knowledge of RDF should leave with a solid understanding of the syntax and structure of the SPARQL query language as a basis for formulating their own SPARQL queries.
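
For readers new to the area, here is a minimal, purely illustrative sketch of the kind of material the tutorial covers: a few RDF triples in Turtle and a basic SPARQL SELECT query over them. The ex: namespace and every class and property name below are hypothetical, not taken from the tutorial materials.

    # Hypothetical Turtle data
    @prefix ex: <http://example.org/> .

    ex:JaneDoe  a  ex:Person ;
                ex:worksFor  ex:Microsoft ;
                ex:age  34 .

    # A basic SPARQL SELECT query over that data
    PREFIX ex: <http://example.org/>

    SELECT ?person ?employer
    WHERE {
      ?person ex:worksFor ?employer .
    }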

Target Audience: Beginner to intermediate level, technically focused. Some familiarity with the basic structure of RDF and its Turtle syntax will be helpful, though this material will be briefly covered at the beginning of the tutorial.


09:00 - 10:40 Session 1

  • Introduction to RDF and Turtle
  • Basic Select Queries
  • Filters
  • Optionals
  • Alternatives (Unions)
  • Producing Result Sets
  • Bind and Values -- see the query sketch after this list
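
As a rough illustration of how the Session 1 constructs fit together, the sketch below combines a union of alternatives, an optional pattern, a filter, VALUES, and BIND in a single query. All names and the ex: namespace are hypothetical.

    PREFIX ex: <http://example.org/>

    SELECT ?person ?contact ?label
    WHERE {
      VALUES ?dept { ex:Engineering ex:Intelligence }         # VALUES: restrict to two departments
      ?person ex:memberOf ?dept .
      { ?person ex:email ?contact . }                         # alternatives (union)
      UNION
      { ?person ex:phone ?contact . }
      OPTIONAL { ?person ex:nickname ?nick . }                # optional pattern
      FILTER ( !BOUND(?nick) || STRLEN(?nick) > 2 )           # filter on the optional value
      BIND ( CONCAT("Contact: ", STR(?contact)) AS ?label )   # BIND: a computed column
    }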


10:40 - 11:00 Break

11:00 - 12:30 Session 2

  • Negation
  • Property Paths
  • Grouping and Aggregate Functions
  • Named Graphs
  • Sub-Queries
  • An introduction to SPARQL Update -- see the query sketch after this list
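
A compact, purely illustrative sketch of several Session 2 constructs: negation with FILTER NOT EXISTS, a property path, grouping with an aggregate, and a one-triple SPARQL Update. All names and the ex: namespace are hypothetical.

    PREFIX ex: <http://example.org/>

    # Count direct and indirect reports per manager, skipping inactive people
    SELECT ?manager (COUNT(DISTINCT ?report) AS ?reportCount)
    WHERE {
      ?report ex:reportsTo+ ?manager .                        # property path: one or more hops
      FILTER NOT EXISTS { ?report ex:status ex:Inactive }     # negation
    }
    GROUP BY ?manager
    HAVING ( COUNT(DISTINCT ?report) > 5 )

    # SPARQL Update: insert a single triple
    PREFIX ex: <http://example.org/>
    INSERT DATA { ex:JaneDoe ex:status ex:Active . }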


12:30             Close



Ian Emmons has over 25 years of software development experience concentrated in various forms of data management, with more than ten years of experience in the Semantic Web community. His Semantic Web experience has emphasized machine-understandable representations of human intent, semantic alignment of multi-source data, coreference resolution, semantic storage and query, and processing of temporal data. Prior to that, his interests included transactional data caching and replication, concurrent processing techniques, object-to-relational mapping, document management, and natural language-assisted information retrieval. His background includes an MA in Mathematics and a BS in Physics from the University of Rochester, three patents, and seven publications.







Go to:   Tutorial A (A.M.)      Tutorial C (P.M.)      Tutorial D (P.M.)


  Tutorial B:   OWL Distilled: Everything You Need and Nothing You Don't

Wednesday A.M., November 18, 2015
Faculty: Michael Uschold and Dave McComb
Description: If you think that getting your (or someone else's) head around OWL is a challenging experience, you have found the right tutorial. There are really just a few foundational building blocks from which everything else is built: 1) individual things -- e.g. JaneDoe, 2) kinds of things -- e.g. Organization, and 3) kinds of relationships -- e.g. worksFor. That's pretty much it. In this tutorial we will describe the many ways that these things can be combined and used. Most importantly, there are triples that assert relationships between things -- e.g. JaneDoe worksFor Microsoft. There is inference to generate new triples, and a few more key things, but not as many as you might think. If you are getting started in OWL, this tutorial has everything you need and nothing you don't. A small sketch of these building blocks in Turtle follows the list below.
    After this tutorial, participants will:
  • Know what the main things are that one needs to say to build an ontology
  • Know how to say those things in OWL
  • Know that an OWL ontology is a set of triples
  • Understand how inference works, and why it matters
  • Have seen numerous examples from commercial enterprise ontologies
  • Be familiar with the 30% of OWL that gets used 90% of the time
  • Be able to continue learning OWL on their own, knowing how to avoid unnecessary pitfalls
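
To make the building blocks concrete, here is a minimal sketch in Turtle that uses the names from the description above. The ex: namespace is hypothetical; only the owl: and rdfs: namespaces are standard. It shows an individual, a class, a property, an asserted triple, and one axiom from which a reasoner infers a new triple.

    @prefix ex:   <http://example.org/> .
    @prefix owl:  <http://www.w3.org/2002/07/owl#> .
    @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

    ex:Organization  a owl:Class .                 # a kind of thing
    ex:worksFor      a owl:ObjectProperty ;        # a kind of relationship
                     rdfs:range ex:Organization .
    ex:JaneDoe       a owl:NamedIndividual ;       # an individual thing
                     ex:worksFor ex:Microsoft .    # an asserted triple

    # Given the range axiom, a reasoner infers the new triple:
    #   ex:Microsoft a ex:Organization .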

Target Audience: The intended audience includes anyone who is interested in a gentle introduction to OWL, or who wants to be able to better explain OWL to others. This might include students and professors whose expertise lies outside of knowledge representation, or industrial participants with a technical background who are interested in building ontologies. Most appropriate for beginner through intermediate levels, as well as experts who wish to see things from a fresh perspective driven by real-world examples.

09:00 - 10:40 Session 1

  • OWL building blocks: Individuals, Classes and Properties
  • OWL restrictions made intelligible
  • Triples
  • Inference
  • Assertion Box (ABox) vs. Terminology Box (TBox) -- see the sketch after this list
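
The following small sketch, with hypothetical names in the ex: namespace, illustrates the Session 1 topics side by side: a class defined by an OWL restriction (TBox) and the instance data it applies to (ABox).

    @prefix ex:  <http://example.org/> .
    @prefix owl: <http://www.w3.org/2002/07/owl#> .

    # TBox (terminology): an Employee is anything that works for some Organization
    ex:Employee  a owl:Class ;
                 owl:equivalentClass [
                     a owl:Restriction ;
                     owl:onProperty ex:worksFor ;
                     owl:someValuesFrom ex:Organization
                 ] .

    # ABox (assertions): data about individuals
    ex:Microsoft  a ex:Organization .
    ex:JaneDoe    ex:worksFor ex:Microsoft .

    # A reasoner infers:  ex:JaneDoe a ex:Employee .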


10:40 - 11:00 Break

11:00 - 12:30 Session 2

  • Boolean constructs: Union, Intersection & Complement -- see the sketch after this list
  • Common patterns for building ontologies
  • Common pitfalls when learning OWL
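
An illustrative sketch of the Boolean constructs covered in Session 2, again with hypothetical names in the ex: namespace.

    @prefix ex:  <http://example.org/> .
    @prefix owl: <http://www.w3.org/2002/07/owl#> .

    # Intersection: a GovernmentContractor is both an Organization and a Contractor
    ex:GovernmentContractor  owl:equivalentClass
        [ a owl:Class ; owl:intersectionOf ( ex:Organization ex:Contractor ) ] .

    # Union: a LegalEntity is either a Person or an Organization
    ex:LegalEntity  owl:equivalentClass
        [ a owl:Class ; owl:unionOf ( ex:Person ex:Organization ) ] .

    # Complement: a NonPerson is anything that is not a Person
    ex:NonPerson  owl:equivalentClass
        [ a owl:Class ; owl:complementOf ex:Person ] .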


12:30             Close



Michael Uschold is an internationally recognized expert with over two decades of experience in developing and transitioning semantic technology from academia to industry. He pioneered the field of ontology engineering, co-authoring the first paper and giving the first tutorial on the topic in 1995 (in London). Since October 2010, he has been working as a senior ontology consultant at Semantic Arts, training and guiding clients to better understand and leverage semantic technology. He has built commercial enterprise ontologies in finance, healthcare, legal research, electrical power, consumer products, manufacturing and media assets. During 2008-2009, Uschold worked at Reinvent on a team that developed a semantic advertising platform. As a research scientist at Boeing from 1997 to 2008, he defined, led and participated in numerous projects applying semantic technology to enterprise challenges. He is a frequent invited speaker and panelist at national and international events, and serves on the editorial board of the Journal of Web Semantics. He has given numerous tutorials and training classes. He received his Ph.D. in AI from Edinburgh University in 1991 and an M.Sc. in Computer Science from Rutgers University in 1982.

Dave McComb is a hands-on practitioner and thought leader in the area of applying Semantic Technology to Enterprise Architecture and Applications. For fourteen years as co-founder and President of Semantic Arts he has managed major Semantic Technology projects with over a dozen large enterprises, including Goldman Sachs, Broadridge Financial Systems, Procter & Gamble, Lexis Nexis, Sentara Healthcare, Sallie Mae, and seven different agencies in the States of Colorado, Texas and Washington. Dave is the author of Semantics in Business Systems, and was the co-founder of the Semantic Technology Conference, the go-to place for companies looking to commercialize semantics. He is a frequent speaker and writer on the topic and has inspired many to enter the field. In the early 1990s Dave pioneered an approach to ontology development based on facilitated brainstorming in focused Semantic Modeling sessions. In the intervening 20 years he has led over 200 of these sessions and built almost as many ontologies. Prior to founding Semantic Arts, Dave spent 13 years with Andersen Consulting (the part that became Accenture) designing and building Enterprise Applications for large firms including Boise Cascade, Georgia Pacific, Wildish Construction, Norton Abrasives, the US Geological Survey, Bougainville Copper, US West and Martin Marietta (now Lockheed Martin). He founded First Principles and co-founded Velocity Healthcare.







Go to:   Tutorial A (A.M.)      Tutorial B (A.M.)      Tutorial D (P.M.)


  Tutorial C:   Building an Enterprise Ontology in Less Than 90 Days

Wednesday P.M., November 18, 2015
Faculty: Dave McComb, Dan Carey and Todd Schneider
Description: This tutorial is strategically aimed at promoting the idea that an Enterprise Ontology is a key first step in rationalizing existing systems and adopting new sources of data, and that the development of such an ontology is not the onerous task it was once believed to be. There is a vital need for enterprises to have a comprehensive enterprise ontology in order to accommodate and survive the tsunami of data that is coming. However, we have discovered that few enterprises have the appetite for a project of the duration that most enterprise data modeling projects take. We have spent the last several years working on a more agile approach to building an enterprise ontology, and have practiced it with several clients. A similar tutorial was presented at the SmartData Conference in San Jose this past August and received glowing evaluations (9.6 out of 10 on "overall value of tutorial").

Target Audience: Appropriate for beginner through advanced

13:30 - 15:10 Session 1

  • The role that an enterprise ontology can play in integrating legacy, unstructured, open and big data initiatives
  • The degree to which our current initiatives have led us 180 degrees away from an integrated model


15:10 - 15:30 Break

15:30 - 17:00 Session 2

  • Characteristics of Semantic Technology that suit it to the key role of creating a stable but evolvable hub for data initiatives
  • Six techniques and methods that have been proven to reduce the size of an enterprise ontology without affecting its scope or fidelity, while reducing the time to uncover and model the ontology


17:00             Close



Dave McComb is President and co-founder of Semantic Arts, an independent consulting firm specializing in helping enterprises adopt semantic technology in their Enterprise Architectures. He has over 30 years of experience with enterprise level systems and enterprise architecture. He has built enterprise ontologies for over a dozen major enterprises, including Procter & Gamble, Schneider-Electric, Goldman Sachs, Broadridge Financials, Lexis Nexis, Sallie Mae, and several State Agencies in Colorado, Texas and Washington. He was the co-founder of the Semantic Technology Conference and the author of Semantics in Business Systems.

Dan Carey has recently joined Semantic Arts as an ontologist. Dan has over 20 years of data modeling experience with a variety of state and federal agencies. He most recently led a key portion of the ontology modeling for projects at the Alcohol and Tobacco Tax and Trade Bureau, as well as leading the enterprise-level data management strategy for the Defense Healthcare Management Systems DMIX program.

Todd Schneider will soon be joining Semantic Arts as an ontologist. Todd has over 25 years of experience with increasing levels of responsibility in data- and ontology-intensive projects. He was a Senior Principal Systems Engineer at Raytheon, where he led a number of semantics-based projects, primarily for defense agency customers. Todd has been an active participant in the Ontolog Forum and the Network Centric Operations Industry Consortium.







Go to:   Tutorial A (A.M.)      Tutorial B (A.M.)      Tutorial C (P.M.)


  Tutorial D:   Threat and Risk Information Sharing, Federation and Analytics

Wednesday P.M., November 18, 2015
Faculty: Cory Casanave
Description: Sharing, federating and analyzing threat and risk information across domains, disciplines, communities and organizations is essential to mitigate the sophisticated attack vectors we face across the physical and cyber worlds. The operational threat and risk information sharing and federation model standards effort, now in the final stages of adoption within the Object Management Group (OMG), enables wide-scale federation and sharing with an all-threats approach that includes the nexus of cyber and physical. By using a semantic conceptual model (i.e., an ontology), implementations of this specification enable "connecting the dots" among threat and risk information sources, technologies, data formats, consumers and capabilities. This tutorial will introduce the threat and risk specification and provide guidance on implementing the standards-based capability for Intelligence, Defense, and Security.

Mission and Purpose of the Threat and Risk Specification

Any organization (commercial, not-for-profit, or government) conducts operations and considers various threats and risks that may disrupt these operations. Threats and risks are increasingly multi-dimensional in nature, spanning physical space and cyber space. Due to the complexity, connectivity and global nature of threats faced by modern organizations, effective risk management and situational awareness depend on collaboration, information sharing and analytics. The mission of this specification is to substantially reduce the time, cost, risks and overhead of independent entities communicating risk and threat information and of federating, simulating and analyzing that information.

Only by federating information across multiple domains such as cyber, physical, critical infrastructure, criminal, intelligence, health and defense, irrespective of technical and political boundaries, can we effectively counter multi-dimensional intentional threats, natural events and system failures. The situation prior to this specification and subsequent implementations is that there are multiple risk and threat sharing and analytics capabilities in different domains, supporting different disciplines and using different data schemas and technologies. While each of these provides value for its purpose, the community has been missing the capability to consider information in context, in combination and with the added value of information from other domains and disciplines. The essential value of information dramatically increases as it is rubbed together with other information. What is not needed is yet another data structure that intends to be the one ring that binds them all; what is needed is the capability to federate and translate between different data structures, technologies, terminologies and human languages relating to risks and threats. Of particular interest is the integration and mapping of NIEM (National Information Exchange Model), STIX (Structured Threat Information eXpression), the NIST cyber security framework and other standards, technologies and products for threat and risk information sharing or analytics.

To meet these goals we seek to define the semantics of risk and threat information, as well as the semantics of the more general concepts from which risk and threat are built, as a conceptual model (ontology) that is then mapped to and between multiple data sharing formats. Implementations will then be able to map between and federate information from any of these data formats. It is further the goal of the community to then build and deploy capabilities that are able to leverage these models to provide advanced analytics, intelligent simulation and dynamic information sharing.
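
As a purely hypothetical illustration of the mapping idea (the actual OMG conceptual model and its vocabulary are not reproduced here), the Turtle sketch below aligns a STIX-style term and a NIEM-style term to a single shared concept so that data arriving in either format can be federated under one ontology. Every namespace and term name is invented for illustration only.

    @prefix ex:    <http://example.org/threat-risk#> .   # stand-in for the shared conceptual model
    @prefix stixx: <http://example.org/stix-terms#> .    # stand-in for STIX-derived terms
    @prefix niemx: <http://example.org/niem-terms#> .    # stand-in for NIEM-derived terms
    @prefix owl:   <http://www.w3.org/2002/07/owl#> .
    @prefix rdfs:  <http://www.w3.org/2000/01/rdf-schema#> .

    # One shared concept in the conceptual model
    ex:ThreatActor  a owl:Class .

    # Format-specific terms mapped onto the shared concept
    stixx:ThreatActor   owl:equivalentClass  ex:ThreatActor .
    niemx:ThreatPerson  rdfs:subClassOf      ex:ThreatActor .

    # Data from either source can now be queried uniformly as instances of ex:ThreatActor.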

While this specification is international in scope, statements at the highest levels of the U.S. government are informative. As stated in a recent executive order of the President of the United States: "In order to address cyber threats to public health and safety, national security, and economic security of the United States, private companies, nonprofit organizations, executive departments and agencies (agencies), and other entities must be able to share information related to cybersecurity risks and incidents and collaborate to respond in as close to real time as possible." The threat/risk specification provides the fundamental semantic underpinnings of this capability. It does so based on open standards and an open community of interest. The Threat and Risk Specification was developed by a consortium of seventeen major industry and government organizations.


Target Audience: Program managers, architects & developers with at least an intermediate level of expertise.
13:30 - 15:10 Session 1

  • Mission and Purpose
  • Goal and approach
  • Why this is important
  • Core Concepts
  • Cross-domain information flow
  • Primary use cases
  • Structuring risk information
  • Response concepts


15:10 - 15:30 Break

15:30 - 17:00 Session 2

  • How concepts relate
  • Mapping examples: STIX and NIEM
  • Threat, risk and data
  • Implementation patterns
  • Specification status and OMG timeline
  • Who is participating and how to join us


17:00             Close



Cory Casanave is the chief architect of the threat and risk submission team with decades of experience in standards, information sharing, model driven architecture and semantic technologies. Mr. Casanave is CEO of Model Driven Solutions and a member of the board of directors for the Object Management Group.





Last updated: 11/18/2015