SEMANTIC TECHNOLOGY FOR INTELLIGENCE, DEFENSE, AND SECURITY (STIDS 2013)
Schedule of Events
Tuesday, November 12th
09:00 - 09:40  Registration
09:40 - 17:00  Tutorials
Wednesday, November 13th
08:00 - 09:00  Registration and Breakfast
09:00 - 09:10  Initial remarks
09:10 - 09:30  Welcome
09:30 - 10:30  Keynote Address
Dr. Benjamin Grosof
Highly Expressive yet Scalable Knowledge for Intelligence, Defense, and Security
presentation
10:30 - 11:00  Break
11:00 - 12:00  Lightning Talks - Chair: Kathryn Laskey
Big Data for Combating Cyber Attacks
paper
presentation
Terry Janssen, Nancy Grady (SAIC)
Abstract
This position paper explores a means of improving cybersecurity using Big Data technologies augmented by ontology
for preventing or reducing losses from cyber attacks. Because of the priority of this threat to national security,
it is necessary to attain results far superior to those found in modern day security operations centers (SOCs).
Focus is on the potential application of ontology engineering to this end. Issues and potential next steps are
discussed.
Hierarchical Decision Making
paper
presentation
Matthew Lewis (Michigan Aerospace Corporation)
Abstract
Decision making must be made within an appropriate context; we contend that such context is best represented by a
hierarchy of states. The lowest levels of this hierarchy represent the observed raw data, or specific low-level
behaviors and decisions. As we ascend the hierarchy, the states become increasingly abstract, representing higher
order tactics, strategies, and over-arching mission goals.
By representing the hierarchy using probabilistic graphical models, we can readily learn the structure and parameters
that define a user's behavior by observing their activities over time--what data they use, how it is visualized, and
what decisions are made. Once learned, the resulting mathematical models may be combined with the techniques of
reinforcement learning to predict behavior and anticipate the needs of the user, delivering appropriate data,
visualizations, and recommending optimal actions.
Towards Context-Aware, Real Time and Autonomous Decision Making Using Information Aggregation and Network Analytics
paper
presentation
Prithiviraj Dasgupta, Sanjukta Bhowmick (University of Nebraska at Omaha)
Abstract
We consider the problem of real-time, proactive decision making for dynamic and time-critical decision-events where the
choices made for multiple, individual decisions over time determine the final decision outcome of an event. We posit that
the quality of such individual decisions can be significantly improved if human decision makers are provided with decision
aids in the form of dynamically updated information and dependencies between the different decision variables, and the
humans affecting those decision variables.
In this position paper, we propose the CONRAD (CONtext aware Real-time Adaptive Decision making) system that uses
computational techniques from large scale network analysis and game theory-based distributed information aggregation to
develop such decision aids. CONRAD's functionalities are implemented through three subsystems--a decision making
subsystem that updates and mathematically combines information from different decision variables to predict the outcome
of the decision event, a decision assessment subsystem that uses the currently predicted decision outcome to estimate
the future decision trajectory and recommends information collection-related actions to the human decision maker, and,
a network analysis subsystem that uses those recommended actions to dynamically update the dependencies and correlations
between events and people influencing the decision variables.
To the best of our knowledge, our work is one of the first attempts towards combining dynamic decision updates and using
the predicted decision trajectory as a proactive feedback mechanism to dynamically update the correlations between decision
variables so that human decision makers can make more strategically informed and well-aligned decisions towards the
desired outcome of decision events.
Need for Community of Interest for Context in Applied Decision Making
paper
presentation
Peter S. Morosoff (Electronic Mapping Systems, Inc.)
Abstract
There is interest in building a community of interest for Context in Applied Decision Making. Warfighters have long
exploited context in decision making. The mystery, therefore, is why the information technology (IT) community that
supports warfighters provides so little IT that exploits context for decision making. One possible answer is the lack
of a forum such as a community of interest that facilitates sharing (a) among those who do or might develop IT that
exploits context for decision making and (b) with warfighters. This paper provides background information on warfighter's
use of context and highlights an IT system that uses computer representations of context in order to facilitate
establishing a community for Context in Applied Decision Making.
12:00 - 13:30  Lunch
13:30 - 15:00  Parallel Sessions 1
Session 1A: Uncertainty Representation - Chair: Ranjeev Mittu
13:30 - 14:00
Context Correlation Using Probabilistic Semantics
paper
Setareh Rafatirad, Kathryn Laskey, Paulo Costa (George Mason University)
Abstract
We present an approach for recognizing high-level geo-temporal phenomena, referred to as events/occurrences, from in-depth
discovery of information, using geo-tagged photos, formal event models, and various context cues like weather, space, time,
and people. Due to the relative availability of information, our approach automatically obtains a probabilistic measure of
occurrence likelihood for the recognized geo-temporal phenomena. This measure, however, is not only used to find the best
event among the merely possible candidates witnessing the data (including photos), but it can also provide informative
cues to human operators in environments where uncertainty is involved in the existing knowledge.
14:00 - 14:30
A Reference Architecture for Probabilistic Ontology Development
paper
presentation
Richard Haberlin (a), Paulo Costa (b), Kathryn Laskey (b)
(a) EMSolutions Inc., (b) George Mason University
Abstract
The use of ontologies is on the rise, as they facilitate interoperability and provide support for automation. Today,
ontologies are popular for research in areas such as the Semantic Web, knowledge engineering, artificial intelligence
and knowledge management. However, many real-world problems in these disciplines are burdened by incomplete information
and other sources of uncertainty that traditional ontologies cannot represent. Therefore, a means to incorporate
uncertainty is a necessity. Probabilistic ontologies extend current ontology formalisms to provide support for
representing and reasoning with uncertainty.
Representation of uncertainty in real-world problems requires probabilistic
ontologies, which integrate the inferential reasoning power of probabilistic representations with the first-order
expressivity of ontologies. This paper introduces a systematic approach to probabilistic ontology development through
a reference architecture which captures the evolution of a traditional ontology into a probabilistic ontology
implementation for real-world problems. The Reference Architecture for Probabilistic Ontology Development catalogues
and defines the processes and artifacts necessary for the development, implementation and evaluation of explicit,
logical and defensible probabilistic ontologies developed for knowledge-sharing and reuse in a given domain.
14:30 - 15:00
Focused Belief Measures for Uncertainty Quantification in High Performance Semantic Analysis
paper
Cliff Joslyn, Jesse Weaver (Pacific Northwest National Laboratory)
Abstract
In web-scale semantic data analytics there is a great need for methods which aggregate uncertainty claims, on the one
hand respecting the information provided as accurately as possible, while on the other still being tractable. Traditional
statistical methods are more robust, but only represent distributional, additive uncertainty. Generalized information
theory methods, including fuzzy systems and Dempster-Shafer (DS) evidence theory, represent multiple forms of uncertainty,
but are computationally and methodologically difficult. We require methods that balance a complete representation of the
full complexity of interacting uncertainty claims against the demands of both computational complexity and human cognition.
Here we build on Jøsang's subjective logic to posit methods in focused belief measures (FBMs), where a full DS structure
is focused to a single event. The resulting ternary logical structure is posited to capture the minimally sufficient
amount of generalized complexity needed at a maximum of computational efficiency. We demonstrate the efficacy of this
approach in a web ingest experiment over the 2012 Billion Triple dataset from the Semantic Web Challenge.
Session 1B: Intelligence Analysis - Chair: Katherine Goodier
13:30 - 14:00
Recognizing and Countering Biases in Intelligence Analysis with TIACRITIS
paper
Gheorghe Tecuci, David Schum, Dorin Marcu, Mihai Boicu (George Mason University)
Abstract
This paper discusses different biases which have been identified in Intelligence Analysis and how TIACRITIS, a
knowledge-based cognitive assistant for evidence-based hypotheses analysis, can help recognize and partially counter
them. After reviewing the architecture of TIACRITIS, the paper shows how it helps recognize and counter many of
the analysts' biases in the evaluation of evidence, in the perception of cause and effect, in the estimation of
probabilities, and in the retrospective evaluation of intelligence reports. The paper then introduces three
other types of bias that are rarely discussed: biases of the sources of testimonial evidence, biases in the
chain of custody of evidence, and biases of the consumers of intelligence, all of which can also be recognized and
countered with TIACRITIS.
14:00 - 14:30
IAO-Intel: An Ontology of Information Artifacts in the Intelligence Domain
paper
presentation
Barry Smith (a), Tatiana Malyuta (b), Ron Rudnicki (c), William Mandrick (d), David Salmen (d),
Peter Morosoff (e), Danielle K. Duff (f), James Schoening (f), Kesny Parent (f)
(a) University at Buffalo, (b) CUNY & Data Tactics, (c) CUBRC, (d) Data Tactics, (e) Electronic Mapping Systems, Inc., (f) I2WD
Abstract
We describe on-going work on IAO-Intel, an information artifact ontology developed as part of a suite of
ontologies designed to support the needs of the US Army intelligence community within the framework of the
Distributed Common Ground System - Army (DCGS-A). IAO-Intel provides a controlled, structured vocabulary for the
consistent formulation of metadata about documents, images, emails and other carriers of information. It will
provide a resource for uniform explication of the terms used in multiple existing military dictionaries,
thesauri and metadata registries, thereby enhancing the degree to which the content formulated with their
aid will be available to computational reasoning.
14:30 - 15:00
Managing Semantic Big Data for Intelligence
paper
presentation
Anne-Claire Boury-Brisset (Defence Research and Development Canada)
Abstract
All-source intelligence production involves the collection and analysis of intelligence data provided in various
formats (raw data from sensors, imagery, text-based from human reports, etc.) and distributed across heterogeneous
data stores. Advances in sensing technologies, the acquisition of new sensors, and the use of mobile devices result
in the production of an overwhelming amount of sensed data, which heightens the challenge of transforming these raw data
into useful, actionable intelligence in a timely manner.
Leveraging recent advances in data integration,
Semantic Web and Big Data technologies, we are adapting key concepts of unified dataspaces and semantic enrichment
for the design and implementation of an R&D intelligence data integration platform, MIDIS (Multi-Intelligence Data
Integration System). The development of this scalable data integration platform rests on the layered dataspace
approach, makes use of recent Big Data technologies, and leverages ontological models and semantic-based analysis
services developed for various purposes as part of the semantic layer.
15:00 - 15:30  Break
15:30 - 17:00  Parallel Sessions 2
Session 2A: Context-Driven Decision Making - Chair: Ciara Sibley
15:30 - 16:00
Context as a Cognitive Process: An Integrative Framework for Supporting Decision Making
paper
Wayne Zachary (a), Andrew Rosoff (a), Stephen Read (b), Lynn Miller (b)
(a) CHI Systems, Inc., (b) University of Southern California
Abstract
Multiple lines of research in cognitive science have brought insight on the role that internal (cognitive)
representations of situational context play in framing decision making and in differentiating expert versus novice
decision performance. However, no single framework has emerged to integrate these lines of research, particularly
the views from narrative reasoning research and those from situation awareness and recognition-primed decision
research. The integrative framework presented here focuses on the cognitive processes involved in developing and
maintaining context understanding, rather than on the content of the context representation at any given moment.
The Narratively-Integrated Multilevel (NIM) framework views context development as an on-going and self-organizing
process in which a set of knowledge elements, rooted in individual experience and expertise, construct and maintain
a declarative, hierarchical representation of the situational context. The context representation that arises from
this process is then shown to be the central point of both situational interpretation and decision-making processes
at multiple levels, from achieving specific local goals to pursuing broad motives in a domain or theater of action.
16:00 - 16:30
Towards a Context-Aware Proactive Decision Support Framework
paper
Benjamin Newsom (a), Ranjeev Mittu (b)
(a) Next Century Corporation, (b) U.S. Naval Research Laboratory
Abstract
The problem of automatically recognizing context, the implications of its shifting properties, and reacting in a
dynamic manner is at the core of mission intelligence and decision making. Operational environments, such as the
OZONE Widget Framework, provide the foundation for capturing the objectives, actions and activities of the mission
analyst and decision maker. By utilizing a "context container" that envelops an OZONE Application, we can capture
both action and intent which allows us to characterize this context with respect to its operational modality
(strategic, tactical, opportunistic, or random).
16:30 - 17:00
Dynamic Data Relevance Estimation by Exploring Models (D2REEM)
paper
presentation
H. Van Dyke Parunak (Soar Technology, Inc.)
Abstract
Analysts in many areas of national security face a massive (high volume), dynamically changing (high velocity) flood
of possibly relevant information. Identifying reasonable suspects confronts a tension between data that is too atomic
to be diagnostic and knowledge that is too complex to guide search.
D2REEM (Dynamic Data Relevance Estimation
by Exploring Models) is a knowledge-based metaheuristic that uses stochastic search of a graph-based semantic model
to guide successive queries of high-volume, high-velocity data. We motivate D2REEM by considering the nature of
knowledge-based search in high-volume, high-velocity data and reviewing current tools. We then outline the D2REEM
metaheuristic and describe the state of progress in applying it to a range of model types, including geospatial
movement, behavioral models, discourse models, narrative generators, and social networks. Finally, we outline
work that needs to be done to advance the D2REEM agenda.
Session 2B: Social Media - Chair: Setareh Rafatirad
15:30 - 16:00
Data Analytics to Detect Money Laundering Evolution
paper
Murad Mehmet, Duminda Wijesekera (George Mason University)
Abstract
Money laundering evolves by using multiple layers of trade, multiple trading methods, and multiple components in
order to evade detection and prevention techniques. Consequently, detecting money laundering requires an analytical
framework that can handle large amounts of unstructured, semi-structured and transactional data streaming at
transactional speeds, detect business complexities, and discover deliberately concealed relationships.
Based
on our prior work and a static risk model proposed in the Bank Security Act, we propose a dynamic risk model that
assigns a risk score for every transaction being a potential component of a larger money-laundering scheme. We use
social networks to connect missing links in such potential transaction sequences. Taken together we can provide a
financial-sector-independent risk assessment of submitted transactions. The proposed risk model is validated using
data from realistic scenarios and our previously developed money laundering evolution detection framework (MLEDF),
which uses sequence matching, case-based analysis, social networks, and complex event
processing to link fraudulent transaction trails. MLEDF has components to collect data, run them against business
rules and evolution models, run detection algorithms and use social network analysis to connect potential participants.
16:00 - 16:30
Extraction of Semantic Activities from Twitter Data
paper
Aleksey Panasyuk (a), Erik Blasch (a), Sue E. Kase (b), Liz Bowman (b)
(a) Air Force Research Lab, (b) Army Research Lab
Abstract
With the growing popularity of Twitter, numerous issues surround the usefulness of the technology for intelligence,
defense, and security. For security, Twitter provides a real-time opportunity to determine unrest and discontent.
For defense, Twitter can be a source of open-source intelligence (INT) information related to areas of contested
environments. However, the semantic content, location of tweets, and richness of the information require big
data analysis for understanding the use of the information for intelligence.
In this paper, we describe some
results of using features extracted from Twitter data to determine events and the semantic implications of the
results from the data, and we discuss pragmatic uses of Twitter data for multi-INT data fusion. The results,
collected during the period of the Egyptian Arab Spring, conclude that (1) many tweets are clutter or noise in
analysis, (2) location information does not always convey the geometrical accuracy of the information, and (3) the
aggregate processing of the Twitter data results in real-time trends of possible events that warrant more
conventional information gathering.
16:30 - 17:00
Situational Awareness from Social Media
paper
Brian Ulicny (a), Jakub Moskal (a), Mieczyslaw M. Kokar (b)
(a) VIStology, Inc., (b) Northeastern University
Abstract
This paper describes VIStology's HADRian system for semantically integrating disparate information sources
into a common operational picture (COP) for humanitarian assistance/disaster relief (HADR) operations. Here the
system is applied to the task of determining where unexploded or additional bombs were being reported via
Twitter in the hours immediately after the Boston Marathon bombing in April 2013. We provide an evaluation of
the results and discuss future directions.
17:00 - 18:30  Poster Session / Social Event
Thursday, November 14th
08:30 - 09:15  Breakfast
09:15 - 09:30  Announcements
09:30 - 10:30  Keynote Address
Dr. Jeffrey Morrison
Exploring the Role of Context in Applied Decision Making
10:30 - 11:00  Break
11:00 - 12:00  Parallel Session 3
Session 3A: Cyber Operations - Chair: Brian Ulicny
11:00 - 11:30
Towards a Cognitive System for Decision Support in Cyber Operations
paper
Alessandro Oltramari (a), Christian Lebiere (a), Lowell Vizenor (b), Wen Zhu (c), Randall Dipert (d)
(a) Carnegie Mellon University, (b) Refinery 29, (c) Alion Science and Technology, (d) University at Buffalo
Abstract
This paper presents the general requirements to build a "cognitive system for decision support", capable of
simulating defensive and offensive cyber operations. We aim to identify the key processes that mediate
interactions between defenders, adversaries and the public, focusing on cognitive and ontological factors.
We describe a controlled experimental phase where the system performance is assessed on a multi-purpose
environment, which is a critical step towards enhancing situational awareness in cyber warfare.
11:30 - 12:00
Using a Semantic Approach to Cyber Impact Assessment
paper
Alexandre de Barros Barreto (a), Paulo Cesar G. Costa (b), Edgar Toshiro Yano (a)
(a) Instituto Tecnológico de Aeronáutica, Brazil, (b) George Mason University
Abstract
The use of cyberspace as a platform for military operations presents many new research challenges. This paper
focuses on the specific problem of assessing the impact of an event in the cyber domain (e.g. a cyber attack)
on the missions it supports. The approach involves the use of Cyber-ARGUS, a C2 simulation framework, along
with semantic technologies to provide consistent mapping between domains. Relevant information is stored in a
semantic knowledge base about the nodes in the cyber domain, and then used to build a Bayesian network to
provide impact assessment. The technique is illustrated through the simulation of an air transportation
scenario in which the C2 infrastructure is subjected to various cyber attacks, and their associated impact
on the operations is assessed.
Session 3B: Alternative Representations - Chair: Charles Twardy
11:00 - 11:30
Analyzing Military Intelligence Using Interactive Semantic Queries
paper
Rod Moten (Sotera Defense Solutions, Inc.)
Abstract
We describe a strategy for performing semantic searches for analyzing military intelligence. Our strategy allows
the analyst and the query engine to work together to reduce a complex query into simpler queries. The answers for
the simpler queries are combined into answers for the original query. The queries can be refined using rules defined
by the analyst or analytics created by a data scientist.
Our strategy uses an alternative approach to semantic
modeling from the state-of-the-art approaches based on OWL. OWL is an implementation of a branch of mathematical
logic designed specifically for semantic modeling called description logics. Our strategy instead uses a branch of
mathematical logic called type theory, chosen because of the long history of developing systems
based on type theory for reasoning interactively. We demonstrate with an example how the strategy can be used
to answer questions posed by analysts that couldn't be answered using conventional methods.
11:30 - 12:00
Sketches, Views and Pattern-Based Reasoning
paper
presentation
Ralph L. Wojtowicz (Baker Mountain Research Corp and Shepherd University)
Abstract
The mathematical theory of sketches provides a graphical framework for describing and relating knowledge representations
and their models. Maps between sketches can extract domain-specific context from a sketch, express knowledge dynamics
and be used to manage representations created for distinct applications or by different analysts. There are precise
connections between classes of sketches and fragments of first order, infinitary predicate logic. EA sketches are a
particular class that is related to entity-attribute-relation diagrams and can be implemented using features available
in many relational database systems. In this paper we illustrate sketch theory through development of a simple human
terrain model. We apply the theory to an example of aligning sketch-based knowledge representations and compare the
approach to one using OWL/RDF. We describe the computational infrastructure that is available for working with
sketches and outline research challenges.
12:00 - 13:30  Lunch
13:30 - 14:30  Parallel Session 4
Session 4A: Access Control - Chair: Bill Mandrick
13:30 - 14:00
An Ontological Inference Driven IVR System
paper
Mohammad Ababneh, Duminda Wijesekera (George Mason University)
Abstract
Someone seeking entry to an access-controlled facility or through a border control point may face an in-person
interview. The questions asked in such an interview may depend on the context and vary in detail. One of
the issues interviewers face is asking relevant questions that would enable them to either accept or reject
entrance. Repeating questions asked at entry-point interviews may render them useless, because most interviewees
come prepared to answer common questions.
As a solution, we present an interactive voice response
system that can generate a random set of questions that are contextually relevant, of the appropriate level of
difficulty and not repeated in successive question answer sessions. Furthermore, our system will have the ability
to limit the number of questions based on the available time, degree of difficulty of generated questions or the
desired subject concentration. Our solution uses Item Response Theory to select questions from a large item bank
generated by inferences over multiple distributed ontologies.
14:00 - 14:30
Fast Semantic Attribute-Role-Based Access Control (ARBAC)
paper
Leo Obrst, Dru McCandless, David Ferrell (MITRE Corporation)
Abstract
We report on our research effort, called Fast Semantic Attribute-Role-Based Access Control (ARBAC), to develop
a semantic platform-independent framework enabling information originators and security administrators to
specify access rights to information consistently and completely, in a social network environment, and then
to rigorously enforce that specification.
We use a modified ARBAC security model and an OWL ontology
with additional rules in a logic programming and Java framework to express access policy, going beyond the
limitations of previous attempts in this vein. We also experimented with knowledge compilation optimizing
techniques that allow access policy constraint checking to be implemented in real-time, via a bit-vector
encoding that can be used for rapid run-time reasoning.
Session 4B: Crisis Management - Chair: Alessandro Oltramari
13:30 - 14:00
Supporting Evacuation Missions with Ontology-based SPARQL Federation
paper
presentation
Audun Stolpe, Jonas Halvorsen, Bjørn Jervell Hansen
Norwegian Defence Research Establishment (FFI)
Abstract
We study ontology-based SPARQL federation in support of coordinated action by deployed units in military operations.
It is presumed that bandwidth is limited and unstable. Thus, we need an approach that generates few HTTP requests.
Existing techniques employ join-order heuristics that may cause requests to multiply as a factor of the number of
joins in a query. This can easily lead to an amount of traffic that exceeds network capacity. We propose an approach
that builds an in-memory excerpt of the remote sources, sending one request to each source. A query is answered
against this excerpt, which is a provably sound and complete representation of the sources with respect to query
answering. The paper ends with a case study involving three military sources used for planning evacuation missions.
14:00 - 14:30
Navigation Assistance Framework for Emergencies
paper
Paul Ngo, Duminda Wijesekera (George Mason University)
Abstract
Emergencies occur every day at unexpected times and impact our lives in unimaginable ways. In any emergency situation,
there are two types of victims: direct victims and indirect victims. Both will have their current plans disrupted in
order to deal with the emergency. Federal, state, and local governments have established the 911 system to assist
direct victims. However, there is still a lack of assistance provided to indirect victims.
In this paper, we
propose a Navigation Assistance Framework that allows emergency organizations to provide emergency information that
can assist victims in navigating out of the emergency area and reaching their intended destinations in a reasonable
amount of time. We develop an emergency prototype, ERSimMon, to simulate this capability on a small scale and show the
effectiveness of the proposed solution. In addition, we develop the Emergency Response Application (ERApp) for a
smartphone platform, which intercepts the enhanced Commercial Mobile Alert System (CMAS) broadcast message,
displays the user's location with respect to the emergency location on a map, and provides navigational
assistance and recommended actions to help the user navigate out of ongoing emergencies.
14:30 - 15:00  Break
15:00 - 16:30  Panel Discussion
Semantic Technologies for Big Data - Moderator: Ian Emmons
Anne-Claire Boury-Brisset
Short Bio
Anne-Claire Boury-Brisset holds a Ph.D. in computer science (artificial intelligence) from the University of
Nancy (France). She has worked as a defense scientist at Defence Research and Development Canada for 15 years. She
is part of the Command, Control and Intelligence section and is group leader for All-Source Intelligence Management.
Her research focuses on information and knowledge management, ontologies, and semantic technologies, applied
to various command and control and intelligence application problems. She currently leads an applied research
project dealing with all-source intelligence information integration and management.
Ian Emmons
Short Bio
Ian Emmons has over 22 years of software development experience concentrated in various forms of data management,
with more than 8 years of experience in the Semantic Web community. His Semantic Web experience has emphasized
semantic alignment of multi-source data, coreference resolution, semantic storage and query, and processing of
temporal data. Prior to that, his interests included transactional data caching and replication, concurrent
processing techniques, object-to-relational mapping, document management, and natural language-assisted
information retrieval. His background includes an MA in Mathematics and a BS in Physics (University of Rochester),
3 patents, and 5 publications.
Benjamin Grosof
Short Bio
Benjamin Grosof is an industry leader in knowledge representation, reasoning, and acquisition. He has pioneered
semantic technology and industry standards for rules, the combination of rules with ontologies, the applications
of rules in e-commerce and policies, and the acquisition of rules and ontologies from natural language (NL).
He has had driving roles in RuleML, W3C RIF (Rule Interchange Format), and W3C OWL-RL (rule-based ontologies).
He led the invention of several fundamental technical advances in knowledge representation, including courteous
defeasibility, restraint bounded rationality, and the rule-based technique which rapidly became the currently
dominant approach to commercial implementation of OWL. He has extensive experience in machine learning,
probabilistic reasoning, and user interaction design.
Dr. Grosof has experience applying core technology for knowledge, reasoning, and related HCI in a wide variety of
application areas, including: trust/privacy/security, contracts, compliance, legal, and services engineering; financial/
insurance services, risk management, and regulations; defense and national intelligence; biomedical research; and data/
decision analytics. From fall 2007 to early 2013, he led a large research program in Artificial Intelligence (AI) and
rule-based semantic technologies at Vulcan Inc. for Paul G. Allen; this centered around the SILK system for highly
expressive, yet scalable, rules. Previously he was an IT professor at MIT Sloan (2000-2007) and a senior software
scientist at IBM Research (1988-2000). He is president of the expert consulting firm Benjamin Grosof & Associates
founded while he was at MIT, and co-founder of the recent start-up Coherent Knowledge Systems.
His background
includes 4 major industry software releases, 2 years in software startups, a Stanford PhD (Computer Science), a
Harvard BA (Applied Mathematics), 2 patents, and over 50 refereed publications.
Terry Janssen
Larry Kerschberg
Short Bio
Larry Kerschberg received his PhD from Case Western Reserve University in Systems Engineering. His MS in Electrical
Engineering is from the University of Wisconsin-Madison, and his BS degree in Engineering Science is from Case
Institute of Technology. He has taught at the Catholic University of Rio de Janeiro, the University of
Maryland-College Park, the University of South Carolina-Columbia, and, for the last 25 years, at George Mason University.
His research areas include the functional data model, expert database systems, intelligent integration of information,
large-scale information architectures, and semantic search. He has published numerous papers in journals and
conferences, and holds two patents in the area of semantic search technology. His work has been funded by NASA,
DARPA and the National Geospatial-Intelligence Agency (NGA).
He has served as Co-Editor-in-Chief of the Journal of Intelligent Information Systems, published by Springer, since
1992.
16:30 - 16:45  Future of STIDS - Paulo Costa
16:45 - 17:00  Wrap Up