Data Science Studio (DSS) from Dataiku is a complete data science software tool for data scientists, analysts, and machine learning experts to perform data analysis and modelling more efficiently. DSS significantly shortens the time spent on data cleaning, model building, and other statistical processes.
DSS connects directly and quickly to the most common data sources, with strong integration capabilities, and infers smart data types that analysts can leverage to validate and transform the data in an automated way.
They can also perform more routine tasks such as replacing, grouping, splitting, and calculating values directly in DSS's interface, which gives instant visual feedback on every operation.
Some Basic Concepts of Data Science Studio:
Projects
Work in DSS is organized into individual projects, each managing its data and associated tasks. The main dashboard listing all projects is called the 'Universe'.
Flow
A DSS project is structured as a flow. The flow visually represents the data pipeline: how datasets are connected to one another through recipes.
Datasets
Data Science Studio supports various kinds of datasets, such as files, SQL tables, and cloud storage.
Recipes
Any pre-processing or data manipulation on the datasets is managed using recipes. Recipes are the building blocks of your data applications: each time you make a transformation, an aggregation, or a join with Data Science Studio, you are creating a recipe.
Recipes have input datasets and output datasets, and they indicate how to create the output datasets from the input datasets.
Data Science Studio supports various kinds of recipes.
Building datasets
Recipes and datasets together create the graph of the relationships between the datasets and how to build them. This graph is called the Flow. It is used by the dependencies management engine to automatically keep your output datasets up to date each time your input datasets or recipes are modified.
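The dependency management described above can be sketched in plain Python (this is a conceptual illustration, not the DSS API; the dataset and recipe names are invented):

```python
from collections import defaultdict

# Conceptual sketch of a Flow: datasets are nodes, recipes are edges from
# inputs to outputs. When an input changes, every downstream dataset is
# stale and must be rebuilt -- this is what the dependency engine tracks.
class Flow:
    def __init__(self):
        self.downstream = defaultdict(list)  # dataset -> datasets built from it

    def add_recipe(self, inputs, outputs):
        """Register a recipe: each output depends on every input."""
        for i in inputs:
            for o in outputs:
                self.downstream[i].append(o)

    def stale_after_change(self, dataset):
        """Return all datasets that must be rebuilt when `dataset` changes."""
        stale, queue = set(), [dataset]
        while queue:
            for nxt in self.downstream[queue.pop()]:
                if nxt not in stale:
                    stale.add(nxt)
                    queue.append(nxt)
        return stale

flow = Flow()
flow.add_recipe(["raw_orders"], ["clean_orders"])           # e.g. a prepare recipe
flow.add_recipe(["clean_orders", "customers"], ["report"])  # e.g. a join recipe

print(sorted(flow.stale_after_change("raw_orders")))  # ['clean_orders', 'report']
```

Editing the `raw_orders` dataset (or any recipe along the way) marks everything downstream as needing a rebuild, which is exactly how DSS keeps output datasets up to date.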
There are two types of recipes used widely in DSS:
Visual recipes: Provide basic manipulation functionalities like data cleaning, filtering, grouping etc.
Code recipes: Used for integrating technical programming like R, Python etc.
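To give a flavour of what the logic inside a Python code recipe might look like, here is a stand-alone sketch. In DSS itself the reading and writing would go through the dataiku package; here plain lists of dicts stand in for the input and output datasets, and all dataset, field, and rate values are invented for illustration:

```python
# Sketch of code-recipe logic: read rows from an input dataset, clean
# them, and produce the rows of an output dataset.
def clean_orders_recipe(raw_orders):
    """Input dataset -> output dataset: drop rows with no amount,
    normalize country codes, and compute a derived column."""
    cleaned = []
    for row in raw_orders:
        if row.get("amount") is None:
            continue  # filtering, as a visual recipe might also do
        cleaned.append({
            "order_id": row["order_id"],
            "country": row["country"].strip().upper(),      # normalization
            "amount_eur": round(row["amount"] * 0.92, 2),   # hypothetical rate
        })
    return cleaned

raw = [
    {"order_id": 1, "country": " fr ", "amount": 100.0},
    {"order_id": 2, "country": "us", "amount": None},
]
print(clean_orders_recipe(raw))  # one cleaned row; the None-amount row is dropped
```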
Dashboard
The dashboard communicates results and gives insights based on the analysis performed on the datasets.
Analysis
This provides a visual analysis of a dataset before it is implemented in the flow, letting you dive deep into the data directly.
Other concepts
Jobs: Every build of a dataset is recorded as a job, keeping track of activity in the flow
Scenarios: Help automate and schedule tasks in the flow
Lab - Notebooks: DSS lets you draft code in an interactive programming environment to make analysis easier and more efficient
Web Apps: Users with web coding skills can create advanced custom web apps using the dedicated editor and REST API
For a fuller introduction to DSS concepts, see the Dataiku documentation.
Data Science Studio reads data from the outside world through "external" datasets. When you use Data Science Studio to create new datasets from recipes, those new datasets are "managed" datasets: Data Science Studio "takes ownership" of them. For example, if a managed dataset is a SQL dataset, Data Science Studio can drop or create the table and change its schema.
Managed datasets are created by Data Science Studio in "managed connections", which act as data stores.
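The "takes ownership" point can be made concrete with a rough stdlib analogy using sqlite3. For a managed SQL dataset, the tool owning the output means it is free to drop and recreate the table on each build so it always matches the recipe's output schema (table and column names here are invented):

```python
import sqlite3

# Rough analogy of a managed SQL output: the tool owns the table, so on
# each build it can drop it and recreate it with the current schema,
# rather than asking the user to manage the table by hand.
def build_managed_output(conn, rows):
    cur = conn.cursor()
    cur.execute("DROP TABLE IF EXISTS clean_orders")  # ownership: safe to drop
    cur.execute("CREATE TABLE clean_orders (order_id INTEGER, country TEXT)")
    cur.executemany("INSERT INTO clean_orders VALUES (?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
build_managed_output(conn, [(1, "FR"), (2, "US")])
# If the recipe's output schema changes, the next build simply recreates
# the table -- nothing external depends on the old definition.
print(conn.execute("SELECT COUNT(*) FROM clean_orders").fetchone()[0])  # 2
```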
Partitioning
Partitioning serves several purposes in Data Science Studio. Partitioning refers to the splitting of the dataset along meaningful dimensions. Each partition contains a subset of the dataset.
For example, a dataset representing a database of customers could be partitioned by country.
There are two kinds of partitioning dimensions:
- "Discrete" partitioning dimensions: the dimension has a small number of values (for example, country or business unit).
- "Time" partitioning dimensions: the dataset is divided into fixed periods of time (year, month, day, or hour). Time partitioning is the most common pattern when dealing with log files.
A dataset can be partitioned by more than one dimension. For example, a dataset of web logs could be partitioned by day and by the server which generated the log line.
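The web-logs example above can be sketched in a few lines of plain Python (a conceptual illustration of partitioning, not the DSS partitioning API; the record fields are invented):

```python
from collections import defaultdict

# Conceptual sketch: partition a dataset of web-log records along two
# dimensions -- day (a time dimension) and server (a discrete dimension).
# Each partition then holds only the subset of rows sharing that key.
def partition(records, dimensions):
    parts = defaultdict(list)
    for rec in records:
        key = tuple(rec[d] for d in dimensions)
        parts[key].append(rec)
    return dict(parts)

logs = [
    {"day": "2024-01-01", "server": "web-1", "path": "/home"},
    {"day": "2024-01-01", "server": "web-2", "path": "/login"},
    {"day": "2024-01-02", "server": "web-1", "path": "/home"},
]
parts = partition(logs, ["day", "server"])
print(sorted(parts))
# [('2024-01-01', 'web-1'), ('2024-01-01', 'web-2'), ('2024-01-02', 'web-1')]
```

Building only the partition for one day and one server then touches a small subset of the data instead of the whole dataset, which is the practical payoff of partitioning.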