This page provides a comprehensive view of the reference architecture of the Data Ocean solution. It offers insights into the high-level block architecture diagram, the key components involved, and their interactions.
Understanding the reference architecture is crucial for gaining a holistic understanding of how the Data Ocean operates and supports the organization's data analytics initiatives.
Reference Architecture serves as a blueprint that outlines the structure and components of a system or solution.
In the context of the Data Ocean, the reference architecture provides a bird's eye view of the system's design and the relationships between its various components.
Understanding the reference architecture of the Data Ocean is vital for several reasons.
In particular, implementing the reference architecture offers numerous benefits for organizations:
Scalability:
The architecture is designed to scale seamlessly as data volumes grow, allowing organizations to accommodate increasing data demands without compromising performance.
Data Quality:
The architecture includes robust data curation processes, ensuring that the ingested data is accurate, consistent, and of high quality.
Data Security:
The architecture incorporates data security measures to protect sensitive data and ensure compliance with regulatory requirements.
Historization:
The architecture supports the storage and management of historical data, enabling organizations to analyze and understand data trends over time.
Maintainability:
By adhering to standardized design patterns and guidelines, the architecture facilitates the maintenance and management of the Data Ocean solution.
Optimization:
The architecture incorporates optimization techniques such as data partitioning, indexing, and compression to improve storage efficiency and query performance.
Governance:
The architecture provides governance mechanisms to enforce data standards, data lineage, and data access controls, ensuring data integrity and compliance.
The high-level block architecture diagram (Figure) provides an overview of the Data Ocean's key components and their interactions.
It showcases the major building blocks of the system and illustrates how data flows through the various stages of ingestion, processing, and serving.

Fig: The Data Ocean vision is materialized on the Data Platform validated in the Apollo Project.
Data sources in a corporate data and analytics solution primarily comprise operational applications that directly support the business. These sources can include internal systems like SAP and CRM systems, as well as internal files and other systems.
Additionally, external databases, websites, APIs, web scraping, JSON, XML files, and other internal or external files may also serve as data sources within this context.
Business analysis and source system analysis play a crucial role in the success of the Data Ocean solution.
Combined, business analysis and source system analysis empower the organization to align its data requirements with business needs and to optimize data integration into the Data Ocean.
Data consumers refer to the various BI tools, dashboard applications, and analytic applications that utilize the data within the Data Ocean.
These tools are outside the scope of the reference architecture but play a crucial role in data consumption and analysis.
The inclusion of a semantic layer could significantly enhance data adoption by providing a unified and standardized view of data across the organization.
The Data Capturing and Ingestion block is a crucial component of the Data Ocean solution, responsible for collecting and ingesting data from diverse sources into the system.
This process involves extracting data from source systems, managing files, and loading them onto Cloud Storage for efficient storage management. It plays a vital role in ensuring the availability and accessibility of data within the Data Ocean.
It involves two primary approaches: batch processing and streaming processing.
In a traditional company, batch processing is often the more common approach for data capturing. It involves collecting and processing data in large volumes at scheduled intervals.
Batch processing is well-suited for scenarios where data can be collected over a period of time and doesn't require real-time analysis.
Use cases for batch processing in a traditional business might involve analyzing sales data, customer demographics, inventory levels, or financial transactions. These use cases often rely on historical data and trends to inform strategic decision-making, as immediate insights are not as critical.
Examples of batch data sources include end-of-day extracts, internal file integrations, and relational databases.
Batch processing offers numerous benefits, including:
Scalability:
Batch processing is well-suited for handling large volumes of data efficiently, allowing for the processing of substantial data sets without performance degradation.
Cost-effectiveness:
By consolidating data from multiple sources and processing it in batches, organizations can reduce the need for real-time infrastructure, resulting in cost savings.
Simplified Data Integration:
Batch processing enables the integration and transformation of data from diverse sources. This ensures consistency and accuracy by harmonizing data formats and structures.
Efficient Resource Utilization:
Batch processing optimizes the utilization of system resources by scheduling data processing tasks during off-peak hours. It minimizes the impact on source systems and avoids overloading them during critical periods.
Extraction Window Management:
With batch processing, organizations can define specific extraction windows to extract data from source systems. This allows for better control and management of data extraction processes.
Failure and Restart Support:
Batch processing frameworks often provide robust mechanisms for handling failures and facilitating restarts. In case of any interruptions or errors during processing, the system can resume from the point of failure, ensuring data integrity and reliability.
Overall, batch processing offers a cost-effective, scalable, and efficient approach for handling large volumes of data, simplifying data integration, optimizing resource usage, managing extraction windows, and supporting failure recovery.
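To make the extraction-window and restart concepts concrete, the sketch below shows a minimal daily batch extraction in Python. The source table, connection string, target path, and checkpoint file are hypothetical placeholders rather than part of the validated platform; the point is only to illustrate windowed extraction with a checkpoint that lets a failed run resume.

import json
from datetime import date, timedelta
from pathlib import Path

import pandas as pd
from sqlalchemy import create_engine, text

CHECKPOINT = Path("checkpoints/sales_extract.json")  # hypothetical checkpoint location

def last_successful_date() -> date:
    # Read the last fully extracted day so that a failed run can resume from there.
    if CHECKPOINT.exists():
        return date.fromisoformat(json.loads(CHECKPOINT.read_text())["last_date"])
    return date(2024, 1, 1)  # assumed initial-load boundary

def extract_window(start: date, end: date) -> None:
    # Extract one day at a time within the agreed extraction window.
    engine = create_engine("postgresql://user:pwd@source-db/erp")  # placeholder connection
    day = start
    while day <= end:
        df = pd.read_sql(
            text("SELECT * FROM sales_orders WHERE order_date = :d"),
            engine, params={"d": day},
        )
        Path("raw/sales_orders").mkdir(parents=True, exist_ok=True)
        df.to_parquet(f"raw/sales_orders/ingest_date={day}.parquet")
        CHECKPOINT.parent.mkdir(parents=True, exist_ok=True)
        CHECKPOINT.write_text(json.dumps({"last_date": day.isoformat()}))
        day += timedelta(days=1)

if __name__ == "__main__":
    start = last_successful_date() + timedelta(days=1)
    extract_window(start, date.today() - timedelta(days=1))  # end-of-day extract up to yesterday

Because the checkpoint is advanced only after a day's extract has been written, a restart after a failure simply continues from the last completed day.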
While batch processing is common in traditional organizations, streaming processing has gained popularity with the rise of real-time data analytics and the Internet of Things (IoT).
Streaming processing is a data processing approach that involves capturing and analyzing data in real-time or near real-time as it is generated. This method is well-suited for situations that require immediate insights and responses, such as real-time monitoring, fraud detection, or predictive maintenance.
Streaming processing is particularly beneficial for applications that require monitoring and control of production lines and the factory floor, enabling timely actions and optimizations. It allows for the continuous analysis of data streams, facilitating rapid decision-making and proactive measures in industrial environments.
Streaming processing offers several advantages, most notably the ability to detect and respond to events as they occur.
Examples of streaming data sources include IoT sensors, social media feeds, clickstream data, and real-time transaction data. In a traditional business, streaming processing might be applied to monitor production lines, track supply chain logistics, or identify predictive maintenance needs in real time.
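As an illustration of how such a stream could be consumed, the sketch below reads machine sensor events from a Kafka-style topic and reacts immediately; the reference architecture does not mandate a specific streaming technology, and the topic name, broker address, and alert threshold are assumptions made purely for this example.

import json

from kafka import KafkaConsumer  # kafka-python

consumer = KafkaConsumer(
    "factory.sensor-readings",                          # hypothetical topic
    bootstrap_servers="broker:9092",                    # placeholder broker address
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    auto_offset_reset="latest",
)

for message in consumer:
    reading = message.value
    # React as soon as an event arrives, e.g. flag a machine for maintenance.
    if reading.get("vibration_mm_s", 0) > 7.1:          # assumed alert threshold
        print(f"ALERT machine={reading['machine_id']} vibration={reading['vibration_mm_s']}")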
It's important to note that while streaming processing offers real-time insights, not all business processes and teams require this level of immediacy.
For many businesses, end-of-day extracts and batch processing provide sufficient data for their needs. This approach is particularly useful for monitoring long-term trends: by analyzing data in batches, organizations gain a comprehensive view of overall performance over time, enabling them to make informed decisions and adjust their long-term strategies accordingly.
The choice between batch and streaming processing depends on the specific business needs and the importance of real-time insights in driving decision-making processes.
The Data Ocean includes the following components:
The Data Storage block in the Data Ocean serves as a repository for housing the raw data captured from various sources. It preserves the data in its original format (before it is integrated and transformed), enabling future integration and transformation. With its scalable storage capacity, it can accommodate and handle large volumes of raw data while ensuring data fidelity and security.
The raw data stored in the Data Storage block can come from various sources, such as operational systems, external data feeds, APIs, files, or streaming data sources, and it may include structured, unstructured, and semi-structured data.
The Data Storage block is a crucial component within the Data Ocean solution as it is responsible for maintaining raw data integrity and availability throughout the entire data lifecycle. Its primary function is to securely store the raw data, making it easily accessible and ready for subsequent processing and analysis. Additionally, the cloud-based storage solutions support the Data Lakehouse approach, allowing for direct analysis of the stored data, by combining the benefits of both data lakes and data warehouses, without the need for extensive transformation or pre-defined schemas. This flexibility and scalability empower organizations to leverage the full potential of their data, uncover hidden patterns, and make data-driven decisions that drive business success.
In the context of the Data Ocean, it is essential to recognize that after the raw data is stored in the Data Storage block, it undergoes subsequent processing stages. These stages, which take place in separate components or blocks within the Data Ocean solution, include data integration, transformation, and normalization. These processes refine the raw data, ensuring its quality and consistency, and prepare it for further analysis and consumption. By going through these subsequent stages, the data becomes more structured and suitable for effective analysis and utilization within the Data Ocean framework.
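A minimal sketch of how raw data can be landed in its original format on cloud object storage, partitioned by ingestion date so it can be reprocessed or re-integrated later, is shown below; the bucket name, the S3-style path, and the use of fsspec are illustrative assumptions rather than prescribed choices.

import json
import uuid
from datetime import date

import fsspec  # abstracts local, S3, GCS and ADLS file systems behind one interface

def land_raw(payload: dict, source: str, bucket: str = "data-ocean-raw") -> str:
    # Store the payload exactly as received, partitioned by source and ingestion date.
    path = f"s3://{bucket}/{source}/ingest_date={date.today()}/{uuid.uuid4()}.json"
    with fsspec.open(path, "w") as fh:
        fh.write(json.dumps(payload))  # no transformation: the original format is preserved
    return path

# Example usage with a hypothetical CRM record
land_raw({"customer_id": 42, "status": "active"}, source="crm")

Because nothing is transformed at this stage, the same landed file can later feed both the curation pipeline and direct lakehouse-style analysis.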
The storage component involves the management and organization of data within the Data Ocean. It encompasses the following:
The curation component of the Data Ocean solution encompasses various activities aimed at transforming, enriching, and preparing raw data for further analysis.
It includes the following key elements:
By incorporating these curation activities, the Data Ocean aims to ensure that the data is of high quality, reliable, and well-prepared for subsequent analysis and decision-making processes.
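As a simple illustration of such curation activities, the sketch below cleanses and standardises a hypothetical raw customer extract with pandas; the column names and rules are examples only, not the solution's actual curation logic.

import pandas as pd

def curate_customers(raw: pd.DataFrame) -> pd.DataFrame:
    curated = raw.copy()
    curated["email"] = curated["email"].str.strip().str.lower()                    # harmonise formats
    curated["country"] = curated["country"].replace({"Deutschland": "DE", "Germany": "DE"})
    curated = curated.dropna(subset=["customer_id"])                               # enforce key completeness
    curated = curated.drop_duplicates(subset="customer_id")                        # remove duplicate keys
    curated["curated_at"] = pd.Timestamp.now(tz="UTC")                             # simple lineage column
    return curated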
For more detail, please read the Data Curation chapter.
The provisioning component in the Data Ocean architecture focuses on ensuring the accessibility and usage of integrated, curated, and consumption-ready data.
It takes a use case-driven approach, allowing the organization to tailor data provisioning strategies to meet specific needs, requirements and objectives. This includes exposing optimized data structures and processing methods that are tailored to specific analytical needs or use cases. This approach facilitates efficient data consumption, exploration, and analysis, allowing the organization to address diverse business challenges and make data-driven decisions.
The generic use cases of provisioning include batch processing, real-time processing, and optimized data storage.
The Data Ocean architecture includes pre-determined provisions for two distinct use cases: the Domain data layer and the Data Product.
The Domain data layer in the Data Ocean architecture serves as a centralized, reliable, and authoritative data source that is subject-oriented, integrated, time-variant, and nonvolatile. It forms a foundational layer that ensures data consistency, reliability, and governance across the organization.
This domain-specific approach enables structured analysis, decision-making, and reporting, making it ideal for standardized and repeatable processes.
It adheres to the internal organizational structure and principles of Domain-Driven Design (DDD).
Major characteristics:
In summary, the Domain data layer in the Data Ocean architecture serves as a centralized, reliable, and authoritative data source, aligning with the internal organization and adhering to the principles of Domain-Driven Design. It ensures that data related to each domain is consolidated, promotes a shared understanding of the data, and facilitates effective data management and integration within the organization.
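To illustrate the time-variant and nonvolatile character of the layer, the sketch below applies a simplified slowly-changing-dimension style of historisation in pandas: superseded record versions are closed with a validity date and new versions are appended, so nothing is ever overwritten. The table and column names are illustrative assumptions.

import pandas as pd

def historise(domain: pd.DataFrame, incoming: pd.DataFrame, key: str = "customer_id") -> pd.DataFrame:
    # Close the open version of any record present in the incoming delta,
    # then append the new version; existing history is never deleted.
    domain = domain.copy()
    today = pd.Timestamp.today().normalize()
    superseded = domain["valid_to"].isna() & domain[key].isin(incoming[key])
    domain.loc[superseded, "valid_to"] = today
    new_versions = incoming.assign(valid_from=today, valid_to=pd.NaT)
    return pd.concat([domain, new_versions], ignore_index=True)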
On the other hand, the Data Product use case emphasizes a more exploratory and iterative approach to data exploration and analysis. It is driven by user-defined requirements and focuses on delivering specific insights and solutions tailored to the needs of different stakeholders.
Data Products are designed to be more flexible and adaptable, accommodating evolving business needs and user preferences. They may be more volatile in nature, depending on the continuous interest and relevance of the insights they provide, and can be decommissioned when they no longer serve their purpose.
Data Products should focus more on performance, simplicity, and user accessibility.
Data provisioning is about building the data models that support the Domain data layer and the Data Products.
It includes:
Prioritising Data Integrity, Flexibility, and Resilience:
By incorporating both the Domain data layer and the Data Product use cases, the provisioning component of the Data Ocean architecture provides a comprehensive solution that meets the diverse data management needs of the organization. It enables a centralized, reliable, and authoritative data layer for structured analysis and decision-making while also facilitating the development of agile and adaptable data products that support exploratory and iterative approaches to uncovering insights and driving innovation.
In the Data Ocean solution, Data Science and Machine Learning are pivotal in driving valuable insights and enabling advanced analytics.
These capabilities empower the organisation to strategically leverage data assets, harness the full potential of data resources, potentially enhance operational efficiency, and gain a competitive edge.
By harnessing the power of data, businesses can unlock new opportunities for growth, drive innovation, and make informed decisions that lead to improved business outcomes.
Data mining is the process of uncovering patterns, hidden relationships, trends, correlations, and valuable insights within extensive datasets. It encompasses a range of techniques and algorithms that enable the extraction of meaningful information from different types of data sources, structured, unstructured, or semi-structured.
Data mining involves supporting activities such as data exploration, preprocessing, and visualisation, which contribute to the overall analysis.
Implementing Data Mining as part of the Data Ocean initiative can provide significant business advantages for the company.
Machine learning is a subset of data science that focuses on building algorithms and models that can learn from data and make predictions or take actions. It enables the Data Ocean to leverage the power of artificial intelligence and automation, allowing for the identification of patterns, anomalies, and trends in large volumes of data.
It covers model training, model evaluation, and model deployment processes, highlighting the integration of machine learning capabilities within the Data Ocean architecture. It can apply different machine learning techniques, such as supervised learning, unsupervised learning, and reinforcement learning.
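A minimal sketch of one such supervised-learning cycle, covering training and evaluation with scikit-learn, is shown below; the data here is synthetic and the model choice is arbitrary, serving only to show the shape of the workflow rather than an actual Data Ocean use case.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for features and labels provisioned from the Data Ocean
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)                                            # model training
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))      # model evaluation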
Data science is the practice of utilising scientific methods, algorithms, and statistical models to derive knowledge and valuable insights from data, enabling informed decision-making based on data. It encompasses activities such as data exploration, preprocessing, and modelling, utilising techniques like data mining and predictive analytics.
The complete lifecycle, including problem conceptualisation, data preparation, model creation, and evaluation, is covered by data science. The incorporation of data science frameworks and tools may also be explored in order to support sophisticated analytics and predictive modelling capabilities.
Data Science, Machine Learning, and Data Mining are interconnected fields that contribute to extracting knowledge and insights from data.
Data Mining involves the process of discovering patterns and extracting valuable information from large datasets. It focuses on uncovering hidden relationships, trends, and patterns that can provide meaningful insights.
Machine Learning, on the other hand, involves developing algorithms that enable computers to learn from data and make predictions or take actions without being explicitly programmed. It can be seen as a subset of Data Science, as it uses statistical techniques and algorithms to automatically learn from data and improve performance over time.
Data Science encompasses a broader range of techniques and methodologies for data analysis, including Data Mining, Machine Learning and Data Visualisation. Data Science provides the context, the methodology and the framework for analysing, understanding and interpreting data, while Data Mining techniques facilitate the extraction of valuable patterns from the data and Machine Learning algorithms automate learning and prediction tasks.
Data science, machine learning, and data mining work together to empower organisations to uncover hidden patterns, extract valuable insights, and solve complex problems using data assets. This enables better data-driven decisions, drives innovation, optimises processes, and identifies new opportunities across various domains.
By incorporating advanced capabilities into the Data Ocean initiative, the company can leverage advanced analytics and automation to gain a competitive edge, drive innovation, and unlock new business opportunities in manufacturing, scientific investigations, sales and other business or production areas.
The following examples and use cases demonstrate how data mining, machine learning, and data science techniques can be applied in the manufacturing industry to drive operational improvements, enhance product quality, and support data-driven decisions.
Data Mining:
Machine Learning:
Data Science:
Data mining, machine learning, and data science can likewise be applied in scientific investigation to gain new insights, make predictions, and drive advancements in various scientific fields. A few examples:
Data Mining:
Machine Learning:
Data Science:
In the context of a Data Ocean, data management involves the strategic and organised handling of vast amounts of data from diverse sources. It encompasses data organisation, storage, protection, governance, and lifecycle management. Effective data management in a Data Ocean requires robust and efficient data governance supported in frameworks, security measures, architecture principles, integration, and quality management practices.
Following the principles outlined by the DAMA Body of Knowledge (DAMA BOK), a practitioner reference that provides guidance on managing data in this complex environment and on ensuring data accessibility, reliability, compliance, and value, data management involves activities such as data modelling, data architecture design, data governance, and metadata management. It focuses on establishing robust data lifecycle processes and implementing comprehensive data management practices so that data is accurate, consistent, and reliable, enabling informed decision-making, driving business innovation, and maximising the value derived from data assets.
The data catalogue is an important component of the Data Ocean architecture, providing a comprehensive inventory of available data assets within the company. It serves as a centralised repository for metadata, allowing users to discover, understand, and access the data they need.
The data catalogue provides detailed information about the data sources, data models, data lineage, and other relevant attributes. It enables data governance and facilitates data discovery, promoting data reuse and reducing redundancy.
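As an illustration of the information such an entry can carry, the sketch below shows a single catalogue record as a plain Python structure; the dataset, fields, and values are hypothetical, and no specific catalogue product is implied.

catalogue_entry = {
    "dataset": "domain.sales.orders",                       # hypothetical dataset name
    "owner": "sales-data-team",
    "description": "Curated sales orders, one row per order line",
    "source_systems": ["SAP"],
    "update_frequency": "daily (end-of-day batch)",
    "lineage": ["raw/sap/orders", "curated/sales/orders"],  # upstream datasets
    "sensitivity": "internal",
    "columns": {
        "order_id": "Business key of the order",
        "order_date": "Date the order was placed",
        "net_amount_eur": "Net order value in EUR",
    },
}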
Data quality and workflow play a crucial role in ensuring the accuracy, consistency, and reliability of data within the Data Ocean.
It encompasses the processes, methodologies, and tools required to establish and maintain high-quality data throughout its lifecycle, ensuring that the Data Ocean solution operates with high-quality data.
Data quality focuses on establishing data quality standards, profiling and assessing data to understand its quality and characteristics, cleansing data to address anomalies, and validating data to verify its accuracy and integrity.
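A minimal sketch of such rule-based validation with pandas is shown below; the rules and column names are illustrative assumptions, and in practice a failed check would block the workflow or alert the responsible data stewards.

import pandas as pd

def validate_orders(df: pd.DataFrame) -> dict:
    # Each rule returns True when the data satisfies the expectation.
    return {
        "no_missing_keys": df["order_id"].notna().all(),
        "keys_are_unique": df["order_id"].is_unique,
        "amounts_non_negative": (df["net_amount_eur"] >= 0).all(),
        "dates_not_in_future": (df["order_date"] <= pd.Timestamp.today()).all(),
    }

results = validate_orders(pd.DataFrame({
    "order_id": [1, 2],
    "net_amount_eur": [10.0, 25.5],
    "order_date": pd.to_datetime(["2024-01-05", "2024-01-06"]),
}))
assert all(results.values()), f"data quality checks failed: {results}"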
Ongoing monitoring and improvement efforts are essential for maintaining data quality.
In summary, data quality is key for a data-driven culture, reliable analytics, operational efficiency, and informed decision-making. By prioritising data quality, organisations empower stakeholders with trustworthy data, facilitating seamless workflows and enhancing decision-making capabilities. Robust data quality practices ensure data reliability and usability, enabling actionable insights and optimised operations.
Data orchestration refers to the coordination and management of various data processing tasks within the Data Ocean. It involves the use of orchestration tools and frameworks to automate and streamline data workflows. It covers the scheduling, sequencing, and dependency management of data processing tasks, ensuring efficient data movement and processing across different stages of the data pipeline.
Effective orchestration enhances the scalability, reliability, and performance of data processing operations.
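As an illustration, the sketch below defines a small daily pipeline with explicit task dependencies using Apache Airflow syntax; Airflow is used here only as an example, since the reference architecture does not prescribe a specific orchestration tool, and the task names are placeholders.

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def ingest():
    print("extract raw data from the sources")

def curate():
    print("cleanse and integrate the raw data")

def provision():
    print("refresh the domain layer and the data products")

with DAG(dag_id="data_ocean_daily", start_date=datetime(2024, 1, 1),
         schedule="@daily", catchup=False) as dag:
    t_ingest = PythonOperator(task_id="ingest", python_callable=ingest)
    t_curate = PythonOperator(task_id="curate", python_callable=curate)
    t_provision = PythonOperator(task_id="provision", python_callable=provision)

    # Explicit dependencies: curation waits for ingestion, provisioning waits for curation.
    t_ingest >> t_curate >> t_provision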
Data audit involves tracking and monitoring data activities within the Data Ocean. It focuses on establishing robust audit trails, logging mechanisms, and monitoring tools to ensure data integrity and regulatory compliance. It includes the tracking of data access, data modifications, and data lineage, providing visibility into data usage and changes. Data audit enables organizations to identify and rectify data anomalies, maintain data governance, and adhere to data privacy and security regulations.
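A minimal sketch of emitting structured audit events for data access and modification is shown below; the field names, the logging target, and the example values are illustrative only.

import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("data_ocean.audit")

def audit(user: str, dataset: str, action: str, rows: int) -> None:
    # One structured, append-only event per data access or modification.
    audit_log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "dataset": dataset,
        "action": action,            # e.g. "read", "update", "delete"
        "rows_affected": rows,
    }))

audit("j.doe", "domain.sales.orders", "read", rows=1250)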
Operations in the context of our Data Ocean encompass the critical aspects of managing and maintaining the infrastructure, security, and performance of the data ecosystem with a focus on ensuring the smooth operation and reliability of the Data Ocean solution.
It encompasses data security measures to safeguard sensitive information, workload management techniques for efficient resource utilization, environment management to provision and configure the necessary infrastructure, backup strategies to protect against data loss, continuous integration and deployment processes for seamless updates, and comprehensive monitoring mechanisms to track the health and performance of the Data Ocean. By addressing these operational aspects, organizations can maintain a robust and secure data environment that supports data-driven decision-making and empowers users to derive maximum value from the Data Ocean solution.
Data security is a critical aspect of the Data Ocean architecture. It involves the implementation of data security measures to protect sensitive data from unauthorised access, loss, or breach. It covers encryption, access controls, authentication, and data privacy techniques, ensuring the confidentiality, integrity, and availability of data within the Data Ocean.
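As a small illustration of application-level protection of a sensitive value, the sketch below uses symmetric encryption from the Python cryptography package; key handling is deliberately simplified here, whereas in practice keys would come from a key management service, and the example value is fictitious.

from cryptography.fernet import Fernet

key = Fernet.generate_key()        # in practice the key would come from a key management service
cipher = Fernet(key)

token = cipher.encrypt(b"DE89 3704 0044 0532 0130 00")   # e.g. a bank account number
print(cipher.decrypt(token).decode())                     # readable only by holders of the key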
Workload and workflow management are critical aspects of efficient data processing in a data ecosystem. Workload management encompasses strategies for distributing tasks across computing resources, optimising resource allocation, and enhancing performance. Workflow management involves various activities such as data transformation, integration, and enrichment. Data transformation ensures standardisation and compatibility, while integration enables seamless data merging for comprehensive analysis. Data enrichment enhances data by incorporating additional relevant information. Effective workflow management streamlines processes, maximising data asset utilisation, and delivering timely, accurate, and valuable insights to end-users.
Environment management focuses on managing the infrastructure and software environments required for the Data Ocean. This sub-chapter covers aspects such as infrastructure provisioning, configuration management, and version control of software components. It ensures that the Data Ocean environment remains stable, up-to-date, and properly configured to support data processing and analytics operations.
Data backup is an essential component of data management, ensuring the protection and recoverability of data in the event of data loss or system failures. This sub-chapter discusses backup strategies, including regular backups, incremental backups, and off-site storage, to mitigate the risk of data loss and ensure data resilience.
Continuous Integration and Continuous Deployment (CI/CD) practices are employed to streamline the development, testing, and deployment of data-related components within the Data Ocean. This sub-chapter explores the integration of CI/CD pipelines to automate and accelerate the deployment of data pipelines, data transformations, and other data-related processes.
Monitoring plays a crucial role in ensuring the health, performance, and availability of the Data Ocean infrastructure and data processing workflows. This sub-chapter focuses on implementing monitoring tools and techniques to track system performance, detect anomalies, and proactively address potential issues. It covers aspects such as real-time monitoring, log analysis, and alerting mechanisms to ensure the reliability and stability of the Data Ocean environment.
By addressing these critical aspects, the organisation can establish a robust and efficient Data Ocean solution that supports their data-driven initiatives and enables them to derive maximum value from their data assets.
The Reference Architecture provides the organisation with a comprehensive and scalable framework for building their Data Ocean solution.
By following the guidelines and best practices outlined in this reference architecture, the organisation can ensure data quality, security, and scalability, while enabling advanced analytics and data-driven decision-making.
The architecture's modular and flexible nature allows for customisation and adaptation to meet specific business requirements.