1. Introduction
This page provides a comprehensive view of the reference architecture of the Data Ocean solution. It offers insights into the high-level block architecture diagram, the key components involved, and their interactions.
Understanding the reference architecture is crucial for gaining a holistic understanding of how the Data Ocean operates and supports the organization's data analytics initiatives.
2. Overview of the Data Ocean Reference Architecture
2.1. What is Reference Architecture?
Reference Architecture serves as a blueprint that outlines the structure and components of a system or solution.
In the context of the Data Ocean, the reference architecture provides a bird's eye view of the system's design and the relationships between its various components.
2.2. Benefits of Understanding the Reference Architecture
Understanding the reference architecture of the Data Ocean is vital for several reasons:
- It helps stakeholders visualize the overall system design and its components.
- It facilitates effective communication and collaboration between technical and non-technical stakeholders.
- It enables better decision-making regarding system enhancements, scalability, and integration with other systems.
- It serves as a foundation for future architectural decisions and system evolution.
2.2.1. Benefits
Implementing the Reference Architecture offers numerous benefits for organizations:
Scalability:
The architecture is designed to scale seamlessly as data volumes grow, allowing organizations to accommodate increasing data demands without compromising performance.
Data Quality:
The architecture includes robust data curation processes, ensuring that the ingested data is accurate, consistent, and of high quality.
Data Security:
The architecture incorporates data security measures to protect sensitive data and ensure compliance with regulatory requirements.
Historization:
The architecture supports the storage and management of historical data, enabling organizations to analyze and understand data trends over time.
Maintainability:
By adhering to standardized design patterns and guidelines, the architecture facilitates the maintenance and management of the Data Ocean solution.
Optimization:
The architecture incorporates optimization techniques such as data partitioning, indexing, and compression to improve storage efficiency and query performance.
Governance:
The architecture provides governance mechanisms to enforce data standards, data lineage, and data access controls, ensuring data integrity and compliance.
3. High-Level Block Architecture Diagram
The high-level block architecture diagram (Figure) provides an overview of the Data Ocean's key components and their interactions.
It showcases the major building blocks of the system and illustrates how data flows through the various stages of ingestion, processing, and serving.
Fig: The Data Ocean vision is materialized on the Data Platform validated in the Apollo Project.
4. Key Components of the Data Ocean Solution
4.1. Data Sources (Outside of the Reference Architecture)
Data sources in a corporate data and analytics solution primarily comprise operational applications that directly support the business. These sources can include internal systems like SAP and CRM systems, as well as internal files and other systems.
Additionally, external databases, websites, APIs, web scraping, JSON, XML files, and other internal or external files may also serve as data sources within this context.
Business analysis and source system analysis play a crucial role in the success of the Data Ocean solution.
- Business analysis is a vital aspect of the Data Ocean initiative, ensuring a deep understanding of the organization's business processes and objectives. Business analysts collaborate with stakeholders to identify data requirements, develop data models, and establish clear data flows.
- Source system analysis is an integral part of obtaining a comprehensive understanding of the supporting source data model. It involves examining relationships, keys, and cardinalities within operational applications. Data profiling and quality assessments are conducted to ensure the reliability and accuracy of the data, enforcing data governance, and contributing to the continuous improvement of the Data Ocean architecture.
Combined, business analysis and source system analysis empower the organization to align its data requirements with business needs and optimize data integration into the Data Ocean.
4.2. Data Consumers (Outside of the Reference Architecture)
Data consumers refer to the various BI tools, dashboard applications, and analytic applications that utilize the data within the Data Ocean.
These tools are outside the scope of the reference architecture but play a crucial role in data consumption and analysis.
The inclusion of a semantic layer could significantly enhance data adoption by providing a unified and standardized view of data across the organization.
4.3. Data Capturing
The Data Capturing and Ingestion block is a crucial component of the Data Ocean solution, responsible for collecting and ingesting data from diverse sources into the system.
This process involves extracting data from source systems, managing files, and loading them onto Cloud Storage for efficient storage management. It plays a vital role in ensuring the availability and accessibility of data within the Data Ocean.
It involves two primary approaches: batch processing and streaming processing.
4.3.1. Batch Processing
In a traditional company, batch processing is often the more common approach for data capturing. It involves collecting and processing data in large volumes at scheduled intervals.
Batch processing is well-suited for scenarios where data can be collected over a period of time and doesn't require real-time analysis.
Use cases for batch processing in a traditional business might involve analyzing sales data, customer demographics, inventory levels, or financial transactions. These use cases often rely on historical data and trends to inform strategic decision-making, as immediate insights are not as critical.
Examples of batch data sources include end-of-day extracts, internal file integrations, and relational databases.
Batch processing offers numerous benefits, including:
Scalability:
Batch processing is well-suited for handling large volumes of data efficiently, allowing for the processing of substantial data sets without performance degradation.
Cost-effectiveness:
By consolidating data from multiple sources and processing it in batches, organizations can reduce the need for real-time infrastructure, resulting in cost savings.
Simplified Data Integration:
Batch processing enables the integration and transformation of data from diverse sources. This ensures consistency and accuracy by harmonizing data formats and structures.
Efficient Resource Utilization:
Batch processing optimizes the utilization of system resources by scheduling data processing tasks during off-peak hours. It minimizes the impact on source systems and avoids overloading them during critical periods.
Extraction Window Management:
With batch processing, organizations can define specific extraction windows to extract data from source systems. This allows for better control and management of data extraction processes.
Failure and Restart Support:
Batch processing frameworks often provide robust mechanisms for handling failures and facilitating restarts. In case of any interruptions or errors during processing, the system can resume from the point of failure, ensuring data integrity and reliability.
Overall, batch processing offers a cost-effective, scalable, and efficient approach for handling large volumes of data, simplifying data integration, optimizing resource usage, managing extraction windows, and supporting failure recovery.
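The failure-and-restart support described above can be sketched as follows: checkpoint each completed batch so a nightly load resumes from the point of failure rather than reprocessing everything. This is a minimal illustration under stated assumptions; the names (`extract_batch`, `run_nightly_load`, the checkpoint file) are hypothetical and not part of any specific framework:

```python
import json
from pathlib import Path

# Hypothetical checkpoint location; a real pipeline would keep this
# alongside the load metadata in cloud storage.
CHECKPOINT_FILE = Path("checkpoint.json")

def load_checkpoint() -> set:
    """Return the set of batch IDs already processed, so a restart
    resumes from the point of failure."""
    if CHECKPOINT_FILE.exists():
        return set(json.loads(CHECKPOINT_FILE.read_text()))
    return set()

def save_checkpoint(done: set) -> None:
    CHECKPOINT_FILE.write_text(json.dumps(sorted(done)))

def extract_batch(batch_id: str) -> list:
    """Placeholder for the real source-system extract (e.g. an
    end-of-day query restricted to the extraction window)."""
    return [{"batch": batch_id, "row": i} for i in range(3)]

def run_nightly_load(batch_ids: list) -> int:
    """Process batches in order, skipping any that a previous
    (possibly failed) run already completed."""
    done = load_checkpoint()
    loaded = 0
    for batch_id in batch_ids:
        if batch_id in done:
            continue  # already landed; safe to skip on restart
        rows = extract_batch(batch_id)
        # In a real pipeline the rows would be written to cloud storage here.
        loaded += len(rows)
        done.add(batch_id)
        save_checkpoint(done)  # checkpoint after each batch for restartability
    return loaded
```

Checkpointing after each batch, rather than once at the end, is what keeps the restart cheap: an interrupted run leaves the checkpoint reflecting exactly the work that completed.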
4.3.2. Streaming Processing
While batch processing is common in traditional organizations, streaming processing has gained popularity with the rise of real-time data analytics and the Internet of Things (IoT).
Streaming processing is a data processing approach that involves capturing and analyzing data in real-time or near real-time as it is generated. This method is well-suited for situations that require immediate insights and responses, such as real-time monitoring, fraud detection, or predictive maintenance.
Streaming processing is particularly beneficial for applications that require monitoring and control of production lines and the factory floor, enabling timely actions and optimizations. It allows for the continuous analysis of data streams, facilitating rapid decision-making and proactive measures in industrial environments.
Streaming processing offers several advantages, including:
- Real-time Insights:
- Streaming data allows for timely analysis and decision-making, enabling organizations to respond quickly to changing conditions.
- Continuous Data Processing:
- Streaming processing handles data as it arrives, ensuring continuous data processing and reducing latency in data availability.
- Event-Driven Architecture:
- Streaming processing enables the detection and response to specific events or triggers, providing proactive insights and actions.
Examples of streaming data sources include IoT sensors, social media feeds, clickstream data, and real-time transaction data. In a traditional business, streaming processing might be applied to monitor production lines, track supply chain logistics, or identify predictive-maintenance needs in real time.
It's important to note that while streaming processing offers real-time insights, not all business processes and teams require this level of immediacy.
For many businesses, end-of-day extracts and batch processing can often provide sufficient data for their needs. This approach is particularly useful for monitoring long-term trends and making adjustments to long-term strategies. By analyzing data in batches, organizations can gain insights into the overall performance and trends over time, allowing them to make informed decisions and adapt their strategies accordingly. This method provides a comprehensive view of the business, enabling effective monitoring and adjustment of long-term goals.
The choice between batch and streaming processing depends on the specific business needs and the importance of real-time insights in driving decision-making processes.
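The event-driven character of streaming processing described above can be sketched with a minimal monitor that handles each reading as it arrives and fires an alert when a reading deviates from the recent rolling average. This is an illustration, not a real streaming framework; `SensorMonitor`, the window size, and the deviation threshold are assumptions chosen for the example:

```python
from collections import deque

class SensorMonitor:
    """Processes readings one at a time as they arrive (continuous
    processing) and records an alert when a reading deviates from the
    rolling average of recent readings (event-driven trigger)."""

    def __init__(self, window_size: int = 5, threshold: float = 10.0):
        self.window = deque(maxlen=window_size)  # recent readings only
        self.threshold = threshold
        self.alerts = []

    def on_reading(self, value: float) -> None:
        if self.window:
            baseline = sum(self.window) / len(self.window)
            if abs(value - baseline) > self.threshold:
                # In production this would trigger a maintenance action
                # or publish an event to a downstream system.
                self.alerts.append(value)
        self.window.append(value)

monitor = SensorMonitor()
for reading in [20.0, 21.0, 19.5, 20.5, 45.0, 20.0]:
    monitor.on_reading(reading)
# 45.0 deviates far from the ~20 baseline and raises an alert.
```

The same shape (per-event handler, small state, immediate reaction) is what dedicated streaming platforms provide at scale, with durability and parallelism added.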
4.4. Lake House
The Lake House includes the following components:
4.4.1. Storage
The Data Storage block in the Data Ocean serves as a repository for housing the raw data captured from various sources. It preserves the data in its original format (before it is integrated and transformed), enabling future integration and transformation. With its scalable storage capacity, it can accommodate and handle large volumes of raw data while ensuring data fidelity and security.
The raw data stored in the Data Storage block can come from various sources, such as operational systems, external data feeds, APIs, files, or streaming data sources, and it may include structured, unstructured, and semi-structured data.
The Data Storage block is a crucial component within the Data Ocean solution as it is responsible for maintaining raw data integrity and availability throughout the entire data lifecycle. Its primary function is to securely store the raw data, making it easily accessible and ready for subsequent processing and analysis. Additionally, the cloud-based storage solutions support the Data Lakehouse approach, allowing for direct analysis of the stored data by combining the benefits of both data lakes and data warehouses, without the need for extensive transformation or pre-defined schemas. This flexibility and scalability empower organizations to leverage the full potential of their data, uncover hidden patterns, and make data-driven decisions that drive business success.
In the context of the Data Ocean, it is essential to recognize that after the raw data is stored in the Data Storage block, it undergoes subsequent processing stages. These stages, which take place in separate components or blocks within the Data Ocean solution, include data integration, transformation, and normalization. These processes refine the raw data, ensuring its quality and consistency, and prepare it for further analysis and consumption. By going through these subsequent stages, the data becomes more structured and suitable for effective analysis and utilization within the Data Ocean framework.
The storage component involves the management and organization of data within the Data Ocean. It encompasses the following:
- Scalable cloud-based storage solutions to accommodate large volumes of data.
- Cloud storage solutions provide virtually unlimited scalability, allowing organizations to store and manage vast amounts of data without worrying about capacity limitations.
- Durability: Cloud-based storage solutions offer high durability, ensuring that data is securely stored and protected against hardware failures or data corruption.
- It uses redundant storage mechanisms to maintain data integrity.
- Accessibility: Cloud storage provides easy and convenient access to data from anywhere with an internet connection.
- Cloud storage solutions usually offer robust APIs and integrations that enable seamless data access and retrieval for applications and services.
- Cost-effectiveness: Cloud storage offers cost advantages over traditional on-premises storage solutions.
- With pay-as-you-go pricing models, organizations only pay for the storage capacity they actually use, avoiding upfront hardware investments and reducing operational costs.
- Data Security: Cloud storage platforms prioritize data security and provide built-in features such as encryption at rest and in transit, access control mechanisms, and audit logs.
- These measures help protect sensitive data and ensure compliance with privacy and security regulations.
- Optimizing data storage: This involves employing techniques such as data compression, storage tiering (based on defined data retention periods), and lifecycle management, which collectively reduce storage costs, maximize storage utilization, and ensure efficient data availability and accessibility.
- Data compression reduces storage requirements by encoding data more compactly.
- Storage tiering categorizes data by access frequency and retention requirements and moves it to the appropriate storage tier. Retention periods define how long data should be preserved before it is considered no longer necessary or relevant for business purposes; the specific period can vary with regulatory requirements, compliance, industry standards, and organizational policies, or simply with efficient data management practices.
- Lifecycle management automates data movement and deletion based on predefined rules, ensuring efficient storage usage.
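The tiering and lifecycle rules above can be sketched as a simple age-to-tier mapping. Tier names, age cut-offs, and the retention period here are assumptions for illustration; cloud providers express the same logic declaratively as bucket lifecycle policies rather than in application code:

```python
from datetime import date, timedelta

# Illustrative rules: objects younger than 30 days stay in the hot tier,
# older objects move to a cheaper archive tier, and objects past the
# retention period are deleted. All cut-offs are assumptions.
LIFECYCLE_RULES = [
    (timedelta(days=30), "hot"),       # frequently accessed data
    (timedelta(days=365), "archive"),  # infrequently accessed data
]
RETENTION = timedelta(days=365 * 7)    # delete after the retention period

def tier_for(last_modified: date, today: date) -> str:
    """Map an object's age to a storage tier, or to deletion once the
    retention period has elapsed."""
    age = today - last_modified
    if age >= RETENTION:
        return "delete"
    for max_age, tier in LIFECYCLE_RULES:
        if age < max_age:
            return tier
    return "archive"  # older than every rule but still within retention
```

A lifecycle policy on the storage bucket would evaluate this kind of rule automatically on every object, which is why the text describes lifecycle management as automated movement and deletion.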
4.4.2. Curation
The curation component of the Data Ocean solution encompasses various activities aimed at transforming, enriching, and preparing raw data for further analysis.
It includes the following key elements:
- Data quality checks: This involves conducting thorough assessments to ensure the accuracy, consistency, and reliability of the data.
- By implementing data validation techniques, organizations can identify and rectify any data anomalies or inconsistencies, ensuring the integrity of the data.
- Data validation: Applying validation rules and checks to ensure the accuracy, completeness, and consistency of the data, verifying that it meets predefined criteria and conforms to expected formats, structures, and business rules.
- Data cleansing processes: Duplicates, errors, and inconsistencies in the data can hinder accurate analysis.
- Data cleansing techniques are applied to remove such issues and ensure the data is clean, complete, and free from any redundancies or errors.
- Data standardization techniques: Data often originates from different sources with varying formats and structures.
- Standardization techniques are employed to transform the data into a consistent format, making it easier to integrate, compare, and analyze across different datasets.
- Data enrichment: To enhance the value and context of the data, integration with external sources and data augmentation techniques are applied. This process involves incorporating additional information, such as external data sets or third-party data sources, to enrich the existing data and provide a more comprehensive view for analysis.
By incorporating these curation activities, the Data Ocean aims to ensure that the data is of high quality, reliable, and well-prepared for subsequent analysis and decision-making processes.
For more detail, please read the Data Curation chapter.
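The curation steps above (validation, cleansing, standardization) can be sketched compactly. The field names, formats, and rules here are hypothetical assumptions for the example, not the solution's actual rules:

```python
import re

# Hypothetical reference data for validation.
VALID_COUNTRIES = {"DE", "FR", "US"}

def validate(record: dict) -> list:
    """Data validation: check expected formats and business rules."""
    errors = []
    if not re.fullmatch(r"C\d{4}", record.get("customer_id", "")):
        errors.append("bad customer_id")
    if record.get("country") not in VALID_COUNTRIES:
        errors.append("unknown country")
    return errors

def standardize(record: dict) -> dict:
    """Data standardization: harmonize formats across source systems."""
    out = dict(record)
    out["country"] = out.get("country", "").strip().upper()
    out["name"] = " ".join(out.get("name", "").split()).title()
    return out

def curate(records: list) -> tuple:
    """Standardize, cleanse (de-duplicate), and validate a batch,
    returning (clean, rejected) records."""
    seen, clean, rejected = set(), [], []
    for rec in records:
        rec = standardize(rec)
        key = rec.get("customer_id")
        if key in seen:
            continue  # data cleansing: drop duplicate records
        seen.add(key)
        (rejected if validate(rec) else clean).append(rec)
    return clean, rejected
```

Running standardization before validation, as here, means records are judged against the rules in their harmonized form rather than rejected for cosmetic differences between source systems.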
4.4.3. Provisioning
The provisioning component in the Data Ocean architecture focuses on ensuring the accessibility and usage of integrated, curated, and consumption-ready data.
It takes a use case-driven approach, allowing the organization to tailor data provisioning strategies to meet specific needs, requirements and objectives. This includes exposing optimized data structures and processing methods that are tailored to specific analytical needs or use cases. This approach facilitates efficient data consumption, exploration, and analysis, allowing the organization to address diverse business challenges and make data-driven decisions.
The generic use cases of provisioning include batch processing, real-time processing, and optimized data storage.
- Use Case: Batch Processing
- Batch processing is a common approach for handling large volumes of data in a traditional organization.
- It involves processing data in predefined batches, typically during scheduled intervals or at the end of the day.
- Batch processing offers several benefits, as discussed earlier (see Data Capturing > Batch Processing).
- Use Case: Real-time Processing
- Real-time processing, on the other hand, involves capturing and processing data as it is generated, providing immediate insights and responses.
- While traditional organizations may not have extensive real-time processing requirements, there are scenarios where real-time insights can be valuable.
- Some possible use cases include:
- Factory Floor Monitoring and Control: Real-time processing can be applied to monitor and control processes on the factory floor, enabling timely responses to potential issues and optimizing production efficiency.
- IoT Data Analysis: With the rise of the Internet of Things (IoT), organizations can leverage real-time processing to analyze sensor data and derive actionable insights. For example, monitoring equipment health in real-time to detect anomalies or predicting maintenance requirements based on real-time sensor readings.
- Use Case: Optimized Data Storage
- The Data Storage block within the Data Ocean architecture plays a critical role in storing integrated and transformed data.
- It can include data warehouses, data lakes, or a combination of both, depending on the organization's data architecture strategy.
- Optimized data storage offers benefits such as:
- Data Exploration and Analysis: By storing raw and normalized data in a central repository, organizations can explore and analyze data from different sources, gaining valuable insights and driving informed decision-making.
- Data Integration: Optimized data storage facilitates the integration of diverse data sources, enabling comprehensive analysis and a holistic view of organizational data.
- Scalability and Flexibility: With a scalable data storage solution, organizations can accommodate the ever-growing volume, velocity, and variety of data, ensuring the ability to handle future data needs.
- Data Governance and Security: By implementing proper data governance practices and security measures, organizations can ensure data integrity, privacy, compliance, and ethical use.
4.4.3.1. Domains and Data Products
The Data Ocean architecture includes pre-determined provisions for two distinct use cases: the Domain data layer and the Data Product.
4.4.3.1.1. Domain
The Domain data layer in the Data Ocean architecture serves as a centralized, reliable, and authoritative data source that is subject-oriented, integrated, time-variant, and nonvolatile. It acts as a foundational layer that ensures data consistency, reliability, and governance across the organization.
This domain-specific approach enables structured analysis, decision-making, and reporting, making it ideal for standardized and repeatable processes.
It adheres to the internal organizational structure and principles of Domain-Driven Design (DDD).
Major characteristics:
- By adopting a Domain-oriented approach, the Domain data layer aims to capture and represent the core concepts, relationships, and business rules specific to each domain within the organization.
- With a focus on centralization, this data layer ensures that all relevant data related to a specific domain is consolidated and made easily accessible.
- By being reliable and authoritative, it establishes a trusted source of data that stakeholders can rely on for decision-making and analysis.
- The Domain data layer supports the organization's internal structure and promotes a shared understanding of the data within different domains.
- By adhering to DDD principles, the Domain data layer enables the organization to effectively model and organize its data based on the real-world business domains.
- This approach facilitates better data management, encourages collaboration, and enhances the overall data quality. It provides a solid foundation for data integration, data governance, and consistent data representation across different applications and systems.
In summary, the Domain data layer in the Data Ocean architecture serves as a centralized, reliable, and authoritative data source, aligning with the internal organization and adhering to the principles of Domain-Driven Design. It ensures that data related to each domain is consolidated, promotes a shared understanding of the data, and facilitates effective data management and integration within the organization.
4.4.3.1.2. Data Product
On the other hand, the Data Product use case emphasizes a more exploratory and iterative approach to data exploration and analysis. It is driven by user-defined requirements and focuses on delivering specific insights and solutions tailored to the needs of different stakeholders.
Data Products are designed to be more flexible and adaptable, accommodating evolving business needs and user preferences. They may be more volatile in nature, depending on the continuous interest and relevance of the insights they provide, and can be decommissioned when they no longer serve their purpose.
Data Products should focus more on performance, simplicity, and user accessibility.
4.4.3.2. Conclusion
Data Provisioning is about building the data models to support the Domain and the Data Products.
It includes:
- Data modeling and schema design to define the structure of Domain Data Models.
Prioritizing Data Integrity, Flexibility, and Resilience:
- The emphasis should lie on key aspects such as data integrity, flexibility, support for full historical information, and a data-oriented approach.
- The focus is on ensuring accurate representation of relationships, maintaining data consistency, and organizing data in a more normalized fashion (not necessarily full normalization), potentially leaning towards a snowflake-style structure rather than a star-schema approach.
- By adopting this data-oriented mindset, the modeling process prioritizes the long-term resilience and reliability of the data, rather than being solely driven by immediate user requirements, simplicity, or performance considerations.
- The goal is to create a robust and adaptable foundation that allows for comprehensive data analysis, informed decision-making, and effective utilization of the data assets.
- Creation of data marts (or One Big Table design) tailored to specific business needs and user requirements.
- Prioritizing Performance, Simplicity, and Alignment with Business Needs and Priorities:
- Tailored data marts are created in the Data Ocean architecture to meet specific business needs, user requirements, and goals, enabling effective data analysis.
- Prioritize data denormalization for simplicity and performance, providing a star-schema model or consolidating relevant data into a single table structure for efficient data retrieval and analysis.
- This approach ensures that data processing and analysis are optimized for speed and efficiency, while also maintaining a simple and user-friendly data structure.
- Implementation of efficient data access mechanisms for fast and seamless data retrieval.
- Integration with analytical tools and platforms for advanced analytics and reporting.
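The denormalization behind data marts and the One Big Table design described above can be sketched as follows: each fact row is widened with all of its dimension attributes so queries need no joins. Table and column names here are illustrative assumptions:

```python
# Hypothetical normalized domain tables (dimension lookups).
customers = {1: {"name": "Acme", "region": "EMEA"}}
products = {10: {"product": "Widget", "category": "Hardware"}}
orders = [  # the fact table
    {"customer_id": 1, "product_id": 10, "qty": 5, "amount": 50.0},
]

def build_one_big_table(orders, customers, products):
    """Denormalize: each output row carries all dimension attributes,
    trading extra storage for simple, fast, join-free queries."""
    rows = []
    for o in orders:
        row = dict(o)
        row.update(customers[o["customer_id"]])  # pull in customer attributes
        row.update(products[o["product_id"]])    # pull in product attributes
        rows.append(row)
    return rows

obt = build_one_big_table(orders, customers, products)
```

In practice this transformation runs as a scheduled SQL job in the warehouse; the point of the sketch is the trade-off it makes visible: repeated dimension values in exchange for simplicity and query performance.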
By incorporating both the Domain data layer and the Data Product use cases, the provisioning component of the Data Ocean architecture provides a comprehensive solution that meets the diverse data management needs of the organization. It enables a centralized, reliable, and authoritative data layer for structured analysis and decision-making while also facilitating the development of agile and adaptable data products that support exploratory and iterative approaches to uncovering insights and driving innovation.
-------------
The design patterns, organization, and standards enforced by the Lake House Architecture are crucial for achieving the desired scalability, reliability, and performance of the Data Ocean solution. By following these guidelines, organizations can ensure a solid foundation for their data management processes, enabling seamless data integration, advanced analytics, and data-driven decision-making.
Additionally, the Lake House Architecture addresses important aspects such as data quality and data security. Through its standardized processes and governance mechanisms, the architecture ensures that data is validated, cleansed, and secured, minimizing the risk of errors, inconsistencies, and unauthorized access.
Overall, the Reference Architecture provides a comprehensive blueprint for organizations to establish a robust and scalable data management solution. By adhering to the design patterns, organization, and standards set forth by the architecture, organizations can unlock the full potential of the Data Ocean and leverage data as a strategic asset for driving business growth and innovation.
---
Data Ingestion Layer
The data ingestion layer (Figure 1) consists of components responsible for collecting data from diverse sources and bringing it into the Data Ocean. It includes connectors, data pipelines, and integration tools that facilitate data acquisition and transformation.
Standardization and clear guidelines are essential for the success of the Lake House Architecture.
By establishing design patterns, organizational structure, and standards, the architecture ensures that the data solution is maintainable, scalable, optimized, well-governed, easily accessible, and leveraged for organizational advantage.
The architecture encompasses three key components: Curation, Storage, and Provisioning. By following the guidelines and best practices outlined in this reference architecture, projects and initiatives can ensure the success of their Data Ocean implementation.
Conclusion
The Reference Architecture provides organizations with a comprehensive and scalable framework for building their Data Ocean solution. By following the guidelines and best practices outlined in this reference architecture, organizations can ensure data quality, security, and scalability, while enabling advanced analytics and data-driven decision-making. The architecture's modular and flexible nature allows for customization and adaptation to meet specific business requirements.
