Do you have multiple versions of the single version of the truth?
What if we didn’t have to search through all those varied, often-contradictory, deceptively
complex, mixed-legacy source systems? What if we could simply get to the authoritative
version of the data and feed it back to our operational systems and analytics applications?
What if we had a tool to automate all of this?
While data is usually shared across a company, it is not necessarily integrated. To complete an analysis, silo-based data is aggregated, analysed and presented in multiple ways, which often produces similar reports with conflicting results. A centralised, managed metadata layer between the data sources and the consuming applications ensures that all business units access and process the same data with consistent formats and processes.
The types of data sources that can be integrated into the virtualised environment are essentially limitless. Internal or external, public or private, the data formats and models do not have to meet any specification at the source, because data is standardised in the platform, at the virtualised layer. The connections between the data sources and the virtualised environment remain persistent. When consuming applications query the data via the virtualised layer, the data is integrated at query time and the results are returned to the requesting application without the data ever being moved from its originating source.
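To make the query-time integration idea concrete, here is a minimal Python sketch. It is purely illustrative and not the fraXses API: the adapter functions, field names and sample values are all assumptions standing in for live connections to two separate source systems. The point is that the join happens only when a query arrives, and no data is copied into a central repository beforehand.

```python
# Hypothetical sketch of query-time federation (not the fraXses API).
# Each adapter reads from its source only when a query arrives; nothing
# is staged in a central repository.

def crm_adapter():
    # Stands in for a live connection to a CRM system (assumed data).
    return [{"customer_id": 1, "name": "Acme"},
            {"customer_id": 2, "name": "Globex"}]

def billing_adapter():
    # Stands in for a live connection to a billing system (assumed data).
    return [{"customer_id": 1, "balance": 120.0},
            {"customer_id": 2, "balance": 0.0}]

def federated_customer_view():
    """Join the two sources at query time and return one unified result."""
    balances = {row["customer_id"]: row["balance"] for row in billing_adapter()}
    return [{**row, "balance": balances.get(row["customer_id"])}
            for row in crm_adapter()]

print(federated_customer_view())
```

Because the adapters are called inside the query, the consuming application always sees the sources' current state, which is the behaviour the paragraph above describes.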
This ensures that consuming applications always have access to the most current data. Because the metadata for data integration is implemented in the platform, it is insulated from changes made at the data sources, such as a change to or addition of a data type, or the addition of an entirely new data source. This eliminates the hours or days of coding and scripting changes that traditional ETL methods require. The ability of the platform to integrate data from anywhere at any time is simply something traditional ETL could not provide.
Virtual views provide a standardised, logical representation of the data based on business needs. They can be built for individual business needs and changed as those needs change. The source or sources of data that constitute a view can be changed without affecting what the consuming applications see or use from the defined view. In other words, data abstraction allows the consuming applications and the data sources to operate independently. The custom views also support data governance efforts, since they provide a centralised location for applying data validation and data quality rules.
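The abstraction a virtual view provides can be sketched as a simple mapping from business vocabulary to source columns. This is an illustrative sketch under assumed column names, not how fraXses defines views internally: the idea it demonstrates is that if a source column is renamed, only the one mapping changes, while every consuming application keeps using the stable business names.

```python
# Hypothetical sketch of a virtual view as a name mapping (illustrative only).
# Business field name -> source column name (both assumed for this example).
CUSTOMER_VIEW = {
    "customer_name": "cust_nm",
    "country":       "ctry_cd",
}

def apply_view(view, source_rows):
    """Expose only the mapped fields, under their stable business names."""
    return [{business: row[source] for business, source in view.items()}
            for row in source_rows]

# Source rows may carry extra internal columns; the view hides them.
source_rows = [{"cust_nm": "Acme", "ctry_cd": "US", "internal_flag": 1}]
print(apply_view(CUSTOMER_VIEW, source_rows))
```

Consumers see only what the view exposes, so the sources and the applications can evolve independently, as the paragraph above describes.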
The data virtualisation platform provides and supports many security features. Permissions can be granted or denied at many levels, including database, catalog, schema, table and column. Row-level security is also supported for all federated data sources. User- and role-based authentication and authorisation allow security policies and settings to be applied in accordance with business requirements. In addition to security controls, data masking prevents highly sensitive data – such as financial, medical or personally identifiable information (PII) – from being seen or read, even if it is passed from the data source. Because the rules and security are implemented in the platform, they provide reusable, repeatable processes that are applied as new data sources are added, allowing for immediate, secure scalability.
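A minimal sketch of how column masking and row-level security can work in a virtualisation layer is shown below. The function names, roles and sample data are assumptions for illustration, not the fraXses implementation: rows a user may not see are filtered out, and sensitive columns are masked before the result leaves the layer.

```python
# Hypothetical sketch of column masking plus row-level security rules
# applied in a virtualisation layer (illustrative, not the fraXses API).

def mask_value(value, visible=4):
    """Mask all but the last `visible` characters, e.g. for PII fields."""
    s = str(value)
    return "*" * max(len(s) - visible, 0) + s[-visible:]

def secure_rows(rows, user_roles, masked_columns, row_filter):
    out = []
    for row in rows:
        if not row_filter(row, user_roles):
            continue  # row-level security: drop rows the user may not see
        out.append({k: mask_value(v) if k in masked_columns else v
                    for k, v in row.items()})
    return out

rows = [{"account": "1234567890", "region": "EU"},
        {"account": "9876543210", "region": "US"}]
result = secure_rows(
    rows, {"eu_analyst"},
    masked_columns={"account"},
    row_filter=lambda r, roles: r["region"] == "EU" and "eu_analyst" in roles)
print(result)  # only the EU row survives, with the account number masked
```

Because the rules live in one layer rather than in each consuming application, adding a new source automatically places it under the same policies.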
The fraXses platform has built-in optimisation capabilities to speed up data access and query processing. Faster access and processing lets existing business solutions run in a more optimised environment, saving money and enabling new business opportunities to be generated faster. Cached queries can be targeted to a specific database or to in-memory storage. With cached queries, processing performance improves because data joins and the results cache can reside completely in any database, or directly in memory, and can be optimised with push-down query execution.
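The effect of a results cache can be sketched in a few lines of Python. This is a generic time-to-live cache for illustration, with assumed names throughout; it is not the fraXses caching engine. The key behaviour it shows is that a repeated identical query is served from memory instead of re-executing against the federated sources.

```python
# Hypothetical sketch of an in-memory query-result cache (illustrative only).
import time

_cache = {}

def run_query(sql, execute, ttl=60.0):
    """Return a cached result for `sql` if still fresh, else execute and cache."""
    now = time.monotonic()
    hit = _cache.get(sql)
    if hit and now - hit[0] < ttl:
        return hit[1]                 # cache hit: no source access
    result = execute(sql)             # cache miss: push the query to the source
    _cache[sql] = (now, result)
    return result

# Count how often the underlying source is actually queried.
calls = []
def execute(sql):
    calls.append(sql)
    return [("row", 1)]

run_query("SELECT 1", execute)
run_query("SELECT 1", execute)
print(len(calls))  # the source was queried only once
```

In a real platform the same idea extends to persisting the cache in a target database and pushing joins down to the sources, as described above.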
Applications throughout the business that need access to enterprise data access the platform through standard Web services – not the underlying data sources. By not having to program and code every report, service and application against every underlying data source, business and IT resources can be freed for other initiatives. This streamlined access to enterprise data also significantly reduces coding errors that put the company at risk.
fraXses enables the business to utilise all its disparate sources of data without coding, or building a new data repository, just to have a single source of truth. By utilising the metadata gathered from the data sources to create its rules and processes, fraXses enables the business to treat all the different data sources as if they were a single source. Its automated data-relationship-building algorithms make it a powerful tool for providing the elusive single source of truth.