HOW AFFIRMA WORKS

Reduce Complexity. Gain Business Value.

Flow of Capabilities

Affirma is a single, seamless point of reference for data modeling, mapping, analytics, and integration. It is built for organizations seeking to transform digitally through their data: to maximize value, to support changing business needs, and to address new technologies and innovation.

Affirma manages the data journey incrementally, building capability upon capability to enable an end-to-end view of data for both internal and external use across your organization.

Affirma supports full management of your data architecture: your single source for Enterprise Semantic Modeling and Metadata Management.

Data Catalog to Enterprise Semantic Model

Every vendor has unique data schemas for their solutions. Some utilize established international standards, and others do not. With the Affirma Data Catalog capability, you import your various data schemas to manage and align them centrally. They form the building blocks for developing your Enterprise Semantic Model, which establishes a common, harmonized data definition across your organization.

Start small. Gather your data in the Data Catalog to create your Enterprise Semantic Model for a specific area of interest or project.

Main Reference Data

Enterprise Semantic Model Defines Data Dictionary

One of the main reasons to establish an Enterprise Semantic Model is to provide a common definition and understanding of data across your organization: the common semantics. In Affirma, as you define each object, attribute, and relationship, you create or import its associated definition. With the Affirma approach, the Data Dictionary is generated from the semantic definitions in the Enterprise Semantic Model. This is a simple way to establish a Data Dictionary grounded in business users' perspectives and in how the IT group stores and moves data within the organization.

Use what you have. Define components to generate a Data Dictionary that links business and IT.


The Data Catalog is the basis for Data Profiling

In Affirma the Data Catalog comprises data schemas from data sources and interfaces. Because the schemas are captured in Affirma, data profiling can be associated with them. You can upload data or connect directly to source systems or interfaces to profile the data associated with a specific schema. The IT group gains insights into data quality that can be communicated to the business, and those insights can later be used to ensure the delivery of quality solutions based on trusted data.
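To make the idea of profiling concrete, here is a minimal, illustrative sketch of the kind of statistics a profiling pass produces; it is not Affirma's actual engine, and the record and column names are hypothetical:

```python
# Illustrative only: a minimal profiling pass over tabular records,
# computing per-column completeness and distinct-value counts.
# The "customers" records and column names are hypothetical examples.

def profile(records, columns):
    """Return {column: {"completeness": float, "distinct": int}}."""
    stats = {}
    for col in columns:
        values = [r.get(col) for r in records]
        present = [v for v in values if v not in (None, "")]
        stats[col] = {
            "completeness": len(present) / len(values) if values else 0.0,
            "distinct": len(set(present)),
        }
    return stats

customers = [
    {"id": 1, "email": "a@example.com"},
    {"id": 2, "email": ""},
    {"id": 3, "email": "a@example.com"},
]
report = profile(customers, ["id", "email"])
```

A report like this immediately surfaces quality issues, here a missing email and a duplicated address, that the IT group can raise with the business.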

Evaluate your data. Connect the Data Catalog to actual data to understand the quality.


Semantic Models enable Data Mapping and Transformation

In Affirma, Mappings and Transformations are built from the Enterprise Semantic Model or chosen Reference Models. A built-in engine codes and stores Data Mappings and Data Transformations. Because Mappings and Transformations in Affirma have established lineage to the Enterprise Semantic Model, which is in turn linked to the Data Catalog, you can review and execute Mappings and Transformations against data for verification purposes. The Affirma approach enables reuse of Mappings and Transformations, improving delivery times and ensuring consistency across your data architecture.

Accelerate delivery. Map and transform once for repeated use.


Data Mapping drives Data and Integration Designs

Data Design uses Mappings and Transformations to create contexts for specific data stores and integration schemas, producing artifacts such as DDL, XSD, and JSON. Integration Design uses Mappings and Transformations to describe the functionality of a web service or microservice, including encryption, transport, security, and even network endpoints and ports, producing artifacts such as WSDL and SOAP.
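As a sketch of what generating a data-at-rest artifact from a modeled entity can look like, consider this minimal example; the entity, attribute names, and type mapping are hypothetical, not Affirma's actual generator:

```python
# Illustrative only: deriving a DDL artifact from a modeled entity.
# The entity definition and the semantic-type-to-SQL mapping below
# are hypothetical examples, not Affirma's internal representation.
entity = {
    "name": "Customer",
    "attributes": [("id", "integer"), ("email", "string")],
}

SQL_TYPES = {"integer": "INTEGER", "string": "VARCHAR(255)"}

def to_ddl(entity):
    """Render a CREATE TABLE statement from the modeled attributes."""
    cols = ",\n  ".join(f"{n} {SQL_TYPES[t]}" for n, t in entity["attributes"])
    return f"CREATE TABLE {entity['name']} (\n  {cols}\n);"
```

Because the DDL is derived from the model rather than written by hand, the same model can equally drive an XSD or JSON artifact, keeping all designs consistent.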

The Affirma approach has established lineage from start to end and utilizes that same model to design for both data-in-motion (i.e., integrations) and data-at-rest (i.e., persistent data stores).

Ensure consistency. Generate designs for data-in-motion and data-at-rest.


Connected Capabilities deliver Data Lineage

Data lineage covers the data's origin, the user actions applied to it, what happens to it, and where it moves over time. Data lineage provides visibility while greatly simplifying the ability to trace errors back to their root cause in a data analytics process. It also enables replaying specific portions or inputs of the data flow for stepwise debugging or for regenerating lost output.
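Tracing an error back to its root cause amounts to walking lineage edges upstream until the original sources are reached. A minimal sketch, with hypothetical node names:

```python
# Illustrative only: walking a lineage graph upstream to find the
# original sources of a derived field. Node names are hypothetical.
derived_from = {
    "report.revenue": ["warehouse.sales_fact"],
    "warehouse.sales_fact": ["crm.orders", "erp.invoices"],
}

def origins(node):
    """Follow derived_from edges back to nodes with no upstream parents."""
    parents = derived_from.get(node)
    if not parents:
        return {node}
    result = set()
    for parent in parents:
        result |= origins(parent)
    return result
```

Given a suspect figure in a report, a walk like this narrows the investigation to the handful of source systems that actually feed it.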

Affirma is built on a single ontology representing all the data in the system, resulting in inherent data lineage. Since the ontology also includes metadata, snapshots in time can be taken. The Affirma approach is foundational for our future direction of utilizing graph-node capabilities. It also allows for innovative AI and ML integration to provide additional forensic activities, augmenting visualization and alerts.

Plan for the future. Understand the impact of your data.


Connected Capabilities enable Build Automation

Affirma provides a cohesive approach to moving from design to deployment, ensuring a connected technology landscape. The capabilities in Affirma connect the dots, empowering you to automatically generate design artifacts and even code through Build Automation. We work with you to provide existing or new Build Automation for your specific technologies, platforms, and solutions.

Reduce Complexity. Gain Business Value.


Governance for Connected Capabilities

Because Affirma's capabilities are connected, data governance for privacy and compliance can be applied at different levels: through workflows, approvals, validation, audits, and more. Each organization's data governance framework differs and is applied differently. That is why Affirma has been designed to let you define your desired level of governance and to expand on it.

Attend to your business. Treat your data as a valuable asset.


Technical Overview of Affirma

Affirma is built on standards from the W3C Semantic Web technologies. The semantic web organizes and utilizes information based on meanings (semantics). The resulting architecture ensures the quality, integrity, and openness of your data by guaranteeing interoperability with a large number of heterogeneous applications and data formats. Affirma helps to establish a semantic network of your metadata. This standardization also enables Affirma to carry out advanced analytics and reasoning on your metadata and instance data by leveraging a wide array of existing libraries and algorithms, allowing you to quickly gain deeper insights into your business data and processes.

Examples of core standards implemented in Affirma include:


Ontology / OWL, RDF & Graph

Web Ontology Language (OWL). All varieties of physical schemas are imported into Affirma as ontologies. This carries significant advantages owing to the flexible nature and high maintainability of ontologies. The ontologies can then be programmatically analyzed to aid in the refinement of the enterprise model, and knowledge graphs can be built against them to gain a deeper understanding of the business data.

The Resource Description Framework (RDF) is a framework for representing and exchanging highly interconnected data on the web, developed and standardized by the World Wide Web Consortium (W3C). It is a general method for describing and exchanging graph data, and it provides a variety of syntax notations and data serialization formats.
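At its core, RDF's data model is simply a set of subject-predicate-object triples forming a graph. A pure-Python sketch of that idea (the ex: terms are hypothetical example identifiers, not part of any real vocabulary):

```python
# Illustrative only: RDF represents data as a set of
# (subject, predicate, object) triples that together form a graph.
# The ex: identifiers below are hypothetical examples.
triples = {
    ("ex:Customer", "rdf:type", "owl:Class"),
    ("ex:email", "rdf:type", "owl:DatatypeProperty"),
    ("ex:email", "rdfs:domain", "ex:Customer"),
    ("ex:alice", "rdf:type", "ex:Customer"),
    ("ex:alice", "ex:email", "alice@example.com"),
}

def objects(subject, predicate):
    """Simple pattern match: all objects for a (subject, predicate) pair."""
    return {o for s, p, o in triples if s == subject and p == predicate}
```

Note how schema statements (the owl:Class and rdfs:domain triples) and instance data (ex:alice) live in the same graph, which is what lets metadata and data be queried uniformly.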


Knowledge Organization and Sharing / SKOS

The Simple Knowledge Organization System (SKOS) is a W3C recommendation and part of the Semantic Web family of standards built upon RDF and RDFS. SKOS is a common data model for sharing and linking knowledge organization systems via the Semantic Web. It is designed for representation of classification schemes, taxonomies, subject-heading systems, or any other type of structured controlled vocabulary. Its main objective is to enable easy publication and use of such vocabularies as linked data.
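The heart of a SKOS vocabulary is the skos:broader relation linking narrower concepts to broader ones. A minimal sketch of such a hierarchy and of walking it upward (the concepts form a hypothetical product taxonomy):

```python
# Illustrative only: a SKOS-style concept scheme where each entry
# maps a concept to its skos:broader concept. The taxonomy is a
# hypothetical example.
broader = {
    "Laptops": "Computers",
    "Desktops": "Computers",
    "Computers": "Electronics",
}

def ancestors(concept):
    """Follow skos:broader links from a concept to the top of the scheme."""
    chain = []
    while concept in broader:
        concept = broader[concept]
        chain.append(concept)
    return chain
```

Because SKOS is itself expressed in RDF, such a scheme can be published as linked data and reused across systems without custom exchange formats.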


Alignment of Ontologies / Notation3 & EYE Reasoner

Notation3 is a standardized way to express mappings (Alignment RDF) and data transformations between various ontologies. The resulting alignments can be used to present data lineage, simulate and project data transformations, generate mapping specifications and runtime code. It is on the Affirma development roadmap to standardize alignments in Notation3.

The EYE Reasoner supports N3 logic inferencing and reasoning. Reasoning is a powerful mechanism for drawing conclusions from facts. EYE's role is to ensure that linked data in one vocabulary can easily be transformed into another, thanks to explicit relations between those vocabularies. EYE is a reasoning engine supporting the Semantic Web layers, performing forward and backward chaining along Euler paths. A few alternatives to EYE exist, most notably the cwm, Jena, and FuXi reasoners; however, EYE offers far superior performance.
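The kind of rewrite an N3 alignment rule performs, for example mapping a vendor predicate onto an enterprise-model predicate, can be sketched in plain Python. This is illustrative only; in practice the rule would be expressed in Notation3 and executed by a reasoner such as EYE, and the vendorA:/esm: predicate names are hypothetical:

```python
# Illustrative only: applying a vocabulary-alignment rule of the form
#   { ?x vendorA:custEmail ?y } => { ?x esm:email ?y }
# as a plain triple rewrite. Predicate names are hypothetical.
alignment = {
    "vendorA:custEmail": "esm:email",
    "vendorA:custName": "esm:fullName",
}

def align(triples):
    """Rewrite predicates from the vendor vocabulary into the enterprise model."""
    return {(s, alignment.get(p, p), o) for s, p, o in triples}

source = {("ex:alice", "vendorA:custEmail", "alice@example.com")}
```

Because the alignment is itself data, the same rules can be replayed to present lineage, simulate a transformation, or generate a mapping specification.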


Access Policies and Governance / ODRL

Open Digital Rights Language (ODRL). Widely used to manage access policies for resources. For Affirma, ODRL forms the foundation of an extensible and powerful governance model.
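To show the shape of an ODRL policy, here is a minimal permission policy in the JSON-LD form defined by the ODRL Information Model, together with a naive evaluation helper. The target and assignee URIs are hypothetical, and real ODRL evaluation involves prohibitions, duties, and constraints beyond this sketch:

```python
# Illustrative only: the shape of an ODRL permission policy (JSON-LD)
# and a naive check against it. Asset and party URIs are hypothetical.
policy = {
    "@context": "http://www.w3.org/ns/odrl.jsonld",
    "@type": "Set",
    "uid": "http://example.com/policy/1",
    "permission": [{
        "target": "http://example.com/asset/customer-data",
        "assignee": "http://example.com/group/analysts",
        "action": "read",
    }],
}

def allows(policy, assignee, action, target):
    """Naive check: does any permission rule grant this request?"""
    return any(
        p.get("assignee") == assignee
        and p.get("action") == action
        and p.get("target") == target
        for p in policy.get("permission", [])
    )
```

Because such policies are machine-readable data rather than code, they can be audited, versioned, and extended alongside the rest of the metadata.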

Mapping and Transformation

Data Lineage

Data Profiling

Data Governance

Build Automation

SINGLE solution for enterprise semantic modeling and metadata management

Watch to see how Affirma harmonizes data integration and analytics for your organization across your data fabric or data mesh.

Incorporate data from vendors and industry standards.

Centralize semantic data model management, data mapping, data transformation, data lineage and more.



Let's Talk!

We would like to know more about you.

What would you like to do with How Affirma Works?