
DevOps and Guidelines

Version 8.0.0.490

WhamTech SmartData Fabric® Solution - Development to Production Process Guidelines

Development

Testing

Staging

Production

Cross-Department/Organization Federation Deployment Management per Data Mesh and APIs as Products Mindset

WhamTech SmartData Fabric® Solution - Development to Production Process Guidelines

As with any enterprise software solution, you should follow established software release management guidelines when you develop and release a WhamTech SmartData Fabric® solution. This process should include the following stages:

  • Development
  • Testing
  • Staging
  • Production

Ideally, you should complete each stage in the release management process in a discrete environment, separate from the other environments. Realistically, you may have to combine one or more of the environments due to hardware, time, or other resource constraints. At a bare minimum, you should separate the production environment from the other environments.

 

Many WhamTech SmartData Fabric® components have APIs for integration into DevOps environments, and automation scripts should be considered to streamline and improve the process.

 

Modern datacenter and cloud platforms provide several built-in features that help with configuring and instantiating the environments for each of the phases as needed. Features such as software-configurable automatic backup/restore, high availability, load balancing, resource scaling, and tiered storage ease this process considerably.

Development:

In this phase, the main focus is on understanding the data sources, data source connectors, schema/data model, data profiling, data quality, transforms, common data model, standard data view mapping, MDM, and CDC/Polling options, and on building data source adapter and federation configuration(s) per solution requirements.

 

Systems Configuration Considerations:

-          A virtual machine (VM) with WhamTech SmartData Fabric® software pre-installed can be used as a basis for creating multiple development machines for multiple developers to work in parallel on different data sources

-          For development, server machines may not be needed (Windows 11/Linux desktop OS machines may suffice)

User access for Developers: Development users require flexibility to explore various options during the development phase and should be provided with sufficient access privileges on the development system(s) and data sources.

 

Multiple adapters on a single machine: A single machine may suffice for configuring multiple adapters and a federation server, or additional machines can be used as needed. At this time, the following limits apply:

-          A single instance of WhamTech EIQ Server per machine

-          A single configuration client per EIQ Server at a time

Data sources: The development phase may use development/test data sources with simulated/generated data that mimics actual production data as much as possible; having access to actual data will help with the data cleansing/standardization process.
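As a minimal sketch of generating simulated development data with the Python standard library (the field names and value ranges below are illustrative assumptions, not the actual source schema):

```python
import csv
import random

# Hypothetical field values -- replace with shapes drawn from the real source schema.
FIRST_NAMES = ["Alice", "Bob", "Carol", "David"]
LAST_NAMES = ["Smith", "Jones", "Garcia", "Chen"]

def generate_rows(n, seed=42):
    """Generate n simulated customer records that mimic a production shape.

    A fixed seed keeps the dataset reproducible across developer machines.
    """
    rng = random.Random(seed)
    rows = []
    for i in range(1, n + 1):
        rows.append({
            "customer_id": i,
            "first_name": rng.choice(FIRST_NAMES),
            "last_name": rng.choice(LAST_NAMES),
            "balance": round(rng.uniform(0, 10_000), 2),
        })
    return rows

def write_csv(path, rows):
    """Write the generated records to a CSV file for adapter development."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(rows[0].keys()))
        writer.writeheader()
        writer.writerows(rows)

rows = generate_rows(100)
write_csv("dev_customers.csv", rows)
```

Seeding the generator keeps development datasets reproducible, so adapter and transform results can be compared across runs and developers.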

 

Output artifacts: The following items, generated as part of the development phase, can be reused in subsequent phases with some modifications.

-          RTI Maps/transforms for data standardization and cleansing

-          Adapter configuration/common business model mapping

-          Federation, MDM configuration, client application queries

-          CDC/Polling configuration to update indexes

-          Sizing parameters for hardware/systems configuration for test/staging/production environment

Artifacts repository: These output artifacts along with WhamTech software should be saved for reuse and maintained in a repository/folder for efficiency, traceability and recoverability. Saving images of VMs should also be considered for easy replication/recoverability.
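A minimal sketch of saving artifacts into a timestamped repository folder, using only the Python standard library (the directory layout and archive naming are assumptions, not a prescribed convention):

```python
import tarfile
import time
from pathlib import Path

def archive_artifacts(artifacts_dir, repo_dir):
    """Save a timestamped snapshot of the artifacts folder into the repository.

    Returns the path of the created archive so callers can log or verify it.
    """
    src = Path(artifacts_dir)
    dest = Path(repo_dir)
    dest.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    archive = dest / f"artifacts-{stamp}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        # Store the folder under its own name inside the archive.
        tar.add(src, arcname=src.name)
    return archive
```

Running this after each significant configuration change gives the traceability and recoverability the repository is meant to provide; the timestamped names make it easy to identify which snapshot corresponds to which change.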

Testing:

In this phase, the environment is set up with specific tests in mind. These typically include tests for functionality, security, access control, performance and scale.

 

VMs and artifacts generated during the development phase can be reused/repurposed in the new environment with modifications suitable for it. For example, data source adapters may need to point to a different data source with a larger dataset.

 

Performance and Scale tests: These tests typically require setting up an environment different from the development environment. For example, each of the adapters and federation servers needs a dedicated machine with a configuration appropriate for production-level performance and scale under target loads.
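A minimal sketch of a concurrent load test that collects latency statistics; `run_query` is a placeholder for whatever client call the solution exposes (not a product API), and the concurrency level and percentile choice are assumptions to tune per target load:

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def load_test(run_query, queries, concurrency=8):
    """Run queries concurrently and return latency statistics in seconds.

    run_query: a callable that executes one query (placeholder for the
    solution's real client call).
    """
    def timed(q):
        start = time.perf_counter()
        run_query(q)
        return time.perf_counter() - start

    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = list(pool.map(timed, queries))

    ordered = sorted(latencies)
    return {
        "count": len(latencies),
        "mean": statistics.mean(latencies),
        "p95": ordered[int(0.95 * (len(ordered) - 1))],
    }
```

The resulting mean and p95 figures feed directly into the sizing and load parameters listed among the testing-phase output artifacts.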

 

Back-up/recovery scenario tests: These tests include scenarios for backing up configurations, indexes, etc., and for recovering/restoring after a planned or unexpected outage. They should cover various boundary conditions and help improve the reliability and robustness of the solution.

 

Process automation: Automating various tasks in the testing, staging and production phases using scripting should be considered to improve efficiency and reliability.
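One simple shape such automation can take is a step runner that executes named tasks in order, logs the outcome of each, and stops on the first failure. This is a generic sketch; the step names are illustrative, and each step body would wrap whatever command or API call the deployment actually uses:

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")

def run_pipeline(steps, stop_on_failure=True):
    """Run (name, callable) automation steps in order.

    Returns a {name: passed} report; later steps are skipped after a
    failure when stop_on_failure is True.
    """
    results = {}
    for name, step in steps:
        try:
            step()
            results[name] = True
            logging.info("PASS %s", name)
        except Exception as exc:
            results[name] = False
            logging.error("FAIL %s: %s", name, exc)
            if stop_on_failure:
                break
    return results

# Illustrative usage -- real steps would invoke the actual build/test tasks.
report = run_pipeline([
    ("archive artifacts", lambda: None),
    ("run smoke tests", lambda: None),
])
```

Stop-on-failure keeps a broken configuration from propagating into later steps, which matters most when the same script is promoted from testing into staging and production use.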

 

Output artifacts: The following items generated as part of the testing phase can be reused in subsequent phases with some modifications.

-          RTI Maps, transforms for various test scenarios

-          Adapter configuration, common business model mapping

-          Federation, MDM configuration, client application queries

-          CDC/Polling configuration to update indexes

-          Sizing parameters for hardware/systems configuration for staging/production environment

-          Load parameters, performance metrics

-          Security and other compliance parameters

-          Test scripts

-          Automation scripts

-          Input for staging and production systems specification

Artifacts repository: These output artifacts along with WhamTech software should be saved for reuse and maintained in a repository/folder for efficiency, traceability and recoverability. Saving images of VMs should also be considered for easy replication/recoverability.

Staging:

Staging environments should be set up to mimic the production environment and, in some cases, prepared to be switched into production.

 

Sizing, load, performance and other metrics collected in the previous phases should be used to configure the systems for staging and production environments.

 

In this phase, data source adapters are built from scratch against full data sets. Often, the data sources used for building adapters are replicated versions of original production systems. In some cases, even production data sources are used. Federation and MDM are built against these adapters. Testing by end users can start in this phase.

 

Apart from functionality, load and performance verification, plans for backup/recovery/restore, software/system upgrades, and other scenarios such as adding new data source adapters to an existing solution should be reviewed and addressed in preparation for deploying into production. Any enterprise compliance requirements should also be addressed in this phase before deploying to production.

 

Automation: Refine automation scripts covering various scenarios, including building, configuring, testing and verification.

 

Snapshots of artifacts and adapter indexes: Configuration artifacts and adapter indexes should be saved periodically, the frequency of which is determined by availability requirements. In many data center and cloud environments, snapshots of VMs containing the artifacts/adapter indexes can be saved to simplify the process.
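When snapshots are taken periodically, older ones usually need to be pruned so storage stays bounded. A minimal retention sketch, assuming the timestamped archive naming used for the artifacts repository (the name pattern and retention count are illustrative assumptions):

```python
from pathlib import Path

def prune_snapshots(snapshot_dir, keep=7):
    """Keep only the `keep` most recent snapshots (by modification time).

    Returns the names of the snapshots that were removed.
    """
    snaps = sorted(
        Path(snapshot_dir).glob("artifacts-*.tar.gz"),
        key=lambda p: p.stat().st_mtime,
        reverse=True,  # newest first
    )
    removed = []
    for old in snaps[keep:]:
        old.unlink()
        removed.append(old.name)
    return removed
```

The retention count should be chosen from the availability requirements that also set the snapshot frequency; e.g., daily snapshots with `keep=7` preserve a week of restore points.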

 

Re-sync’ing adapter indexes with data source changes after an outage: Adapter indexes may go out of sync with data sources under certain circumstances (e.g., when the index update process is stopped for any reason). In such cases, data source changes that occurred after the outage can be applied to the most recent snapshot before bringing it online.
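The re-sync step above amounts to replaying, against the restored snapshot, only those recorded changes newer than the snapshot's timestamp. A schematic sketch, with a hypothetical change log of (timestamp, change) pairs and a caller-supplied apply function standing in for the actual CDC/Polling mechanism:

```python
def resync_index(snapshot_time, change_log, apply_change):
    """Replay data source changes recorded after the snapshot was taken.

    change_log: iterable of (timestamp, change) pairs from a hypothetical
    CDC/polling log; apply_change: callable applying one change to the
    restored index.  Returns the number of changes replayed.
    """
    replayed = 0
    # Sort by timestamp so changes are applied in source order.
    for ts, change in sorted(change_log):
        if ts > snapshot_time:
            apply_change(change)
            replayed += 1
    return replayed
```

Only changes strictly newer than the snapshot are applied, which keeps the replay idempotent with respect to changes the snapshot already contains.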

Production:

The production environment is usually set up similar to the staging environment. In this phase, the solution is deployed against production data sources and is called by production applications used by end users.

 

Plans should be developed and tested for outages and for automatic switchover to a backup environment.

 

An initial gradual roll-out may limit access to certain types of users, certain applications, etc.

Cross-Department/Organization Federation Deployment Management per Data Mesh and APIs as Products Mindset

 

Adapters/federation servers are managed as a service, providing data as a product to data consumers.

 

Managing adapter/federation data as a product:

-          Strategize and design the product: what it is, intended consumers, stakeholders, vision, objectives, roadmap, budget, etc.

-          Define the product: version, release date, expected end-of-life date, etc.

-          Standard Data Dictionary/Business Dictionary and Industry Model – shared and agreed upon

-          Business Views Definition

-          Service Level Agreement:

           §  Availability

           §  Responsiveness

           §  Usage parameters: number of users/queries/data records/size

-          Support and maintenance parameters

-          Pricing

-          Change protocols and notification of changes to the service/data

-          Documentation: of the data product, usage, guidelines

-          Promote/Publish: in enterprise data catalogs, marketplace

-          Access: direct DB connection/data service APIs; set up domain/proxy user/authentication/access rights

 

In cases where customer requirements and usage are uncertain, start with a Minimum Viable Product (MVP) and, through iterations, mature the service.

 

Copyright © 2026, WhamTech, Inc. All rights reserved. This document is provided for information purposes only and the contents hereof are subject to change without notice. Names may be trademarks of their respective owners.