Industry 4.0 Solutions in the Ceramic and Frit Manufacturing Process
Research | 14/05/2024 | 10 minutes read
Industry 4.0, Industrial Intelligence, Smart Manufacturing or Industrial Internet
It is no longer surprising to hear the media talk about big data, artificial intelligence (AI), the Internet of Things (IoT) and Industry 4.0; that data is the new oil of the 21st century; that new terabytes of data are generated every day; and, above all, that this trend will continue to grow significantly.
Industry 4.0 is the gateway to more precise and efficient production control, cost optimization and production flexibility. The manufacturing processes of ceramic tiles, frits and sanitaryware are not immune to this revolution. If anything characterizes these industrial processes, it is their considerable operating costs, which makes the progressive transformation towards digitalization in these sectors of maximum interest and a key factor in an increasingly competitive market.
In the frit manufacturing process, more than 30 basic process variables are identified, while in ceramic tile manufacturing this figure exceeds 50. If we add parameters indicative of transport quality, the state of equipment that can affect production start/stop or product quality (e.g. vibration/noise in motors), environmental conditions and so on, the volume of data can scale to dimensions that are impossible to manage with traditional technologies.
Introducing new technologies specifically designed for this purpose therefore becomes necessary.
Big Data technology: the three Vs
The main characteristics that differentiate big data technologies from traditional technologies are represented by the three Vs: volume, velocity and variety.
Volume. Due to growing digitalization and the emergence of new sources of information, the amount of data generated in companies grows every year.
Massive data exploitation in ceramic processes will move from managing information on the order of megabytes (MB) to the order of petabytes (PB).
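As a rough, purely illustrative back-of-envelope calculation (the sensor counts, sampling rate and record size below are assumptions, not measurements from a real plant), the jump in scale is easy to reproduce:

```python
# Back-of-envelope estimate of yearly data volume for a ceramic plant.
# Sensor counts, sampling rate and record size are illustrative assumptions.

SENSORS = 50              # basic process variables (order of magnitude cited above)
EXTRA_SIGNALS = 200       # vibration, noise, environmental conditions, ...
SAMPLES_PER_SECOND = 10   # assumed sampling rate per signal
BYTES_PER_SAMPLE = 64     # timestamp + value + metadata, rough estimate

signals = SENSORS + EXTRA_SIGNALS
seconds_per_year = 365 * 24 * 3600

bytes_per_year = signals * SAMPLES_PER_SECOND * BYTES_PER_SAMPLE * seconds_per_year
terabytes_per_year = bytes_per_year / 1024**4

print(f"~{terabytes_per_year:.1f} TB per year for a single production line")
# Several plants and lines, plus images from quality inspection,
# quickly push this figure towards the petabyte range.
```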
Velocity. We can distinguish two main forms of data management:
1. Batch
Traditionally, data processing has been carried out in batch mode, that is, by launching scheduled processes that collect and process batches of information previously stored in a database.
2. Streaming
But, in the big data world, it is increasingly necessary to do streaming processing, that is, to process data that arrives in a continuous and uninterrupted flow. This scenario is common, for example, in the Internet of Things, where data from sensors that continuously send such information flows must be processed, or in the world of cybersecurity, where responses must be immediate.
Streaming processing poses new challenges, such as processing data as it arrives and providing responses in near real time. This is something that traditional technologies, intended for batch processing, were not designed for.
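A minimal sketch of the difference in plain Python may help; the sensor feed, readings and alarm threshold below are hypothetical:

```python
import time
from statistics import mean

# --- Batch: process a block of readings that was previously stored ---
def batch_report(stored_readings):
    """Summarise a batch of kiln temperature readings collected earlier."""
    return {"count": len(stored_readings), "avg_temp": mean(stored_readings)}

# --- Streaming: react to each reading as it arrives ---
def stream_monitor(reading_source, alarm_threshold=1180.0):
    """Consume an unbounded flow of readings and raise alerts immediately."""
    for temperature in reading_source:          # potentially never-ending iterator
        if temperature > alarm_threshold:       # respond in near real time
            print(f"ALERT: kiln temperature {temperature} °C")

def fake_sensor():
    """Stand-in for an IoT sensor feed (values are made up)."""
    for value in (1150.0, 1172.5, 1185.3, 1160.1):
        time.sleep(0.1)   # simulate arrival over time
        yield value

print(batch_report([1150.0, 1172.5, 1185.3, 1160.1]))
stream_monitor(fake_sensor())
```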
Variety. Traditional databases were designed to work with structured information. This is stored in tables: structures with a schema predefined when the database is designed, which cannot easily be changed afterwards.
The schema determines the type of data that can be saved in the table. This kind of storage was therefore not designed to manage unstructured information, such as text, images or audio. Nor is it suitable for information that requires flexible and changing schemas, such as data from different types of sensors.
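As a small illustration, document-style records (plain Python dictionaries here, as a schema-flexible document store would keep them) can hold readings from different sensor types in the same flow; the field names are assumptions:

```python
import json

# Readings from two different sensor types: same stream, different fields.
readings = [
    {"sensor": "kiln-01", "type": "temperature", "ts": "2024-05-14T10:00:00",
     "value_c": 1152.4},
    {"sensor": "motor-07", "type": "vibration", "ts": "2024-05-14T10:00:01",
     "rms_mm_s": 3.8, "peak_mm_s": 9.1, "axis": "radial"},
]

# A fixed relational table would need a column for every possible field
# (mostly NULL), or one table per sensor type. A schema-flexible store
# simply keeps each document as it arrives:
for doc in readings:
    print(json.dumps(doc))
```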
What challenges will companies in the tile and frit sectors have to overcome on the way to Industry 4.0?
The path towards digitalization must inevitably begin with two critical stages:
- a clear and comprehensive evaluation of the initial state of the factory in aspects related to the data collection and analysis system, using a maturity model
- definition of a strategy and specific objectives to achieve the degree of automation and digitalization pursued
At the same time, there are two key figures that are inseparable from a successful digitalization process:
- a technological partner that accompanies us from the beginning, both in defining the strategy and in the development of software, interfaces, automation…
- the figure of the data scientist within the organization, who guarantees that maximum value is obtained from the new data flow
i. Big Data Maturity Models (BDMM), or... where are we?
A maturity model defines a series of stages or levels characterized by qualitative aspects of an organization, usually related to culture, people, processes or technologies. Its objectives are:
- Assess the current state, providing big data capability assessment tools in key areas related to people, processes and technologies.
- Help define the state you want to reach and establish the milestones that must be met to get there.
- Help define the actions necessary to move from the current state to the desired one; in this case, to guide the way towards correctly establishing and developing a series of big data capabilities.
A big data maturity model will aim to evaluate capabilities related to the following aspects of the organization:
Big Data Maturity Model. Aspects of the company under evaluation.
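As a toy illustration of how such an assessment might be recorded and used, the dimensions and scores below are hypothetical and do not come from any published model:

```python
# Hypothetical self-assessment against the aspects a maturity model evaluates.
# Scores run from 1 (nascent) to 5 (mature); dimensions and values are made up.
assessment = {
    "organization": 2,    # culture, sponsorship, data-driven decision making
    "infrastructure": 3,  # platforms, integration, automation
    "data_management": 2,
    "analytics": 1,
    "governance": 2,
}

current_level = sum(assessment.values()) / len(assessment)
gaps = sorted(assessment, key=assessment.get)   # weakest areas first

print(f"Average maturity: {current_level:.1f} / 5")
print("Priority areas:", ", ".join(gaps[:2]))
```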
TDWI Big Data Maturity Model
TDWI's is one of the most detailed models for evaluating companies, defining successive stages of maturity that range from nascent adoption to a fully mature, data-driven organization.
ii. Strategy Definition
An adequate data strategy must look at how to put data at the service of the organization's strategic aspirations and business objectives. Once this strategy has been developed, a roadmap aligned with it can be established, which will help in two respects:
- On the one hand, it will serve as a basis for making decisions about the characteristics of the technological platform that will underpin all subsequent initiatives.
- On the other hand, it will allow initiatives and investments to be prioritized, leading to a project plan for building the technological platform and, subsequently, developing on it all the projects related to the exploitation of corporate data.
One may wonder how a company can move from a data strategy, defined at a very high level of abstraction, to a roadmap with sufficient detail to indicate the steps to follow, the initiatives to propose and the capabilities to develop.
One way to do this is by using the capabilities-based model:
Capabilities-based model: from strategy definition to the organization's roadmap.
The company's strategic lines are taken as a starting point.
Business objectives are defined, aligned with the strategic lines.
The objectives must be concrete and tangible, with a series of associated metrics that allow them to be measured.
Use cases are defined that help us achieve the established business objectives.
A use case is a story about the interaction between a user and a system. A business goal indicates what you want to achieve while the use case indicates how to achieve it.
It is about associating use cases with each business objective defined in the previous point.
The capabilities that will have to be developed in order to implement those use cases are then defined in detail.
Capabilities represent a collection of business processes, people and technology that allow you to satisfy a specific purpose.
Capabilities are quantifiable; they can be associated with an effort, a cost or a value. In this way, different criteria can be applied to build a prioritized list of the capabilities you want to develop.
The model is applied through a series of steps from top to bottom: you start with the data strategy and work your way down until you have a list of capabilities to develop, adequately defined and prioritized.
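A minimal sketch of that top-down chain, with hypothetical objectives, use cases and capabilities; prioritizing by value per unit of effort is just one possible criterion:

```python
from dataclasses import dataclass

@dataclass
class Capability:
    name: str
    use_case: str      # the use case it enables
    objective: str     # the business objective behind that use case
    effort: int        # estimated effort, e.g. person-months
    value: int         # expected value, arbitrary units

# Hypothetical example for a frit plant.
capabilities = [
    Capability("Sensor data ingestion", "Kiln monitoring", "Reduce energy cost", 6, 9),
    Capability("Quality image storage", "Defect detection", "Reduce scrap rate", 4, 7),
    Capability("Forecasting toolkit", "Demand planning", "Improve delivery times", 8, 6),
]

# Prioritize by value per unit of effort (one of many possible criteria).
for cap in sorted(capabilities, key=lambda c: c.value / c.effort, reverse=True):
    print(f"{cap.name:24s} -> {cap.objective:22s} (value/effort = {cap.value / cap.effort:.2f})")
```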
iii. The Data Scientist
The data scientist is responsible for extracting all possible value from the data, through descriptive analysis, which answers "what happened"; predictive analysis, which answers "what will happen" through forecasting models; and prescriptive models, such as recommendation or optimization models, which provide the actions needed to manage future situations.
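The three types of analysis can be caricatured in a few lines of Python; the consumption figures, the naive trend model and the recommendation rule are all illustrative stand-ins for real models:

```python
from statistics import mean

# Daily gas consumption of a kiln (illustrative values).
consumption = [820, 805, 840, 860, 855, 880, 895]

# Descriptive: what happened?
print("Average consumption:", mean(consumption))

# Predictive: what will happen? (naive linear trend as a stand-in for a real forecast)
slope = (consumption[-1] - consumption[0]) / (len(consumption) - 1)
forecast = consumption[-1] + slope
print("Forecast for tomorrow:", round(forecast, 1))

# Prescriptive: what should we do? (toy rule in place of an optimization model)
if forecast > 875:
    print("Recommendation: schedule burner tuning before the next batch")
```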
Data scientists are characterized by mastering the following competencies:
Statistics
Not only for the application and implementation of algorithms, but also to help the company understand the data in detail and reveal the hidden relationships that each variable or feature has with the rest of a dataset. A good data scientist will know how to apply statistical tests to verify hypotheses about the nature of the data and, of course, will have notions of statistical modeling.
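For instance, a quick correlation test with scipy; the variable names and the synthetic data below are assumptions made for the example:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical paired measurements from the same firing cycles.
kiln_temperature = rng.normal(1150, 15, size=200)
defect_rate = 0.002 * kiln_temperature + rng.normal(0, 0.05, size=200)

# Pearson correlation test: is there a significant linear relationship?
r, p_value = stats.pearsonr(kiln_temperature, defect_rate)
print(f"r = {r:.2f}, p-value = {p_value:.3g}")
if p_value < 0.05:
    print("The correlation is statistically significant at the 5% level")
```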
Software development
This is self-evident, since more and more companies operate in digital environments or are undergoing digitization processes. In addition, a company's data can come from different (heterogeneous) sources and reside in different access points (databases, Excel files, plain text files, internet files, PDFs, etc.). Therefore, a data scientist has to know software development (mainly oriented towards scientific programming) in order to load the data as datasets, clean them and transform them so that they make sense.
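A short, hypothetical example with pandas (the file names, column names and merge key are assumptions):

```python
import pandas as pd

# Load the same production data from heterogeneous sources.
batches_db = pd.read_csv("batches_export.csv")      # export from a database
lab_results = pd.read_excel("lab_results.xlsx")     # spreadsheet from the lab

# Clean and transform so the datasets make sense together.
batches_db["batch_id"] = batches_db["batch_id"].str.strip().str.upper()
lab_results = lab_results.dropna(subset=["batch_id"]).drop_duplicates("batch_id")

dataset = batches_db.merge(lab_results, on="batch_id", how="inner")
print(dataset.describe())
```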
Communication
This competency has more to do with so-called soft skills. It is not enough to process data, transform it and apply models and algorithms to obtain results; it is also essential to know how to communicate those results and to produce data visualizations that convey the outcome of a process, a project or an investigation visually.
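For example, a simple matplotlib chart; the metric, the values and the timing of the process change are invented for illustration:

```python
import matplotlib.pyplot as plt

# Hypothetical monthly scrap rate before and after a process change.
months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun"]
scrap_rate = [4.1, 3.9, 4.2, 3.1, 2.8, 2.6]

fig, ax = plt.subplots(figsize=(6, 3))
ax.plot(months, scrap_rate, marker="o")
ax.axvline(2.5, linestyle="--", color="grey")   # process change between Mar and Apr
ax.set_ylabel("Scrap rate (%)")
ax.set_title("Scrap rate before and after burner retuning")
fig.tight_layout()
fig.savefig("scrap_rate.png")                   # or plt.show() in an interactive session
```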
Domain knowledge
The data can come from any area or sector (banking, health, finance, industry, etc.). Being knowledgeable about the domain the data belongs to is an advantage when interpreting the information that can be extracted from it and when deciding what questions can usefully be asked of it.