<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en-GB">
	<id>https://wiki.ai-redgio50.s5labs.eu/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Admino739mjm7</id>
	<title>AI-REDGIO 5.0 - User contributions [en-gb]</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.ai-redgio50.s5labs.eu/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Admino739mjm7"/>
	<link rel="alternate" type="text/html" href="https://wiki.ai-redgio50.s5labs.eu/index.php?title=Special:Contributions/Admino739mjm7"/>
	<updated>2026-04-18T14:11:12Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.43.1</generator>
	<entry>
		<id>https://wiki.ai-redgio50.s5labs.eu/index.php?title=AIREDGIO5.0_AI_Enabling_Tools&amp;diff=569</id>
		<title>AIREDGIO5.0 AI Enabling Tools</title>
		<link rel="alternate" type="text/html" href="https://wiki.ai-redgio50.s5labs.eu/index.php?title=AIREDGIO5.0_AI_Enabling_Tools&amp;diff=569"/>
		<updated>2026-02-16T12:09:22Z</updated>

		<summary type="html">&lt;p&gt;Admino739mjm7: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Tools developed in the context of the AI REDGIO 5.0 project to assist in your Industry 5.0 journey&lt;br /&gt;
&lt;br /&gt;
== AI REDGIO 5.0 Collaborative Intelligence Platform &#039;&#039;by SCCH&#039;&#039; ==&lt;br /&gt;
* [http://wiki.ai-redgio50.s5labs.eu/index.php?title=AI_REDGIO_5.0_Collaborative_Intelligence_Platform Collaborative Intelligence Platform]&lt;br /&gt;
== AI REDGIO 5.0 Open Hardware Platform &#039;&#039;by Libelium / HOPU&#039;&#039; ==&lt;br /&gt;
* [http://wiki.ai-redgio50.s5labs.eu/index.php?title=AI_REDGIO_5.0_Open_Hardware_Platform Open Hardware Platform]&lt;br /&gt;
== AI REDGIO 5.0 Open Hardware Platform v2 &#039;&#039;by Libelium / HOPU&#039;&#039; ==&lt;br /&gt;
* [https://wiki.ai-redgio50.s5labs.eu/index.php?title=AI_REDGIO_5.0_Open_Hardware_Platform_v2 Open Hardware Platform v2]&lt;br /&gt;
== AI REDGIO 5.0 AI Pipeline Designer &#039;&#039;by SUITE5&#039;&#039; ==&lt;br /&gt;
* [https://wiki.ai-redgio50.s5labs.eu/index.php?title=AI_REDGIO_5.0_AI_Pipeline_Designer AI Pipeline Designer]&lt;br /&gt;
== AI REDGIO 5.0 Smart Data Enabler &#039;&#039;by SMC&#039;&#039; ==&lt;br /&gt;
* [https://wiki.ai-redgio50.s5labs.eu/index.php?title=AI_REDGIO_5.0_Smart_Data_Enabler Smart Data Enabler]&lt;/div&gt;</summary>
		<author><name>Admino739mjm7</name></author>
	</entry>
	<entry>
		<id>https://wiki.ai-redgio50.s5labs.eu/index.php?title=AI_REDGIO_5.0_Smart_Data_Enabler&amp;diff=568</id>
		<title>AI REDGIO 5.0 Smart Data Enabler</title>
		<link rel="alternate" type="text/html" href="https://wiki.ai-redgio50.s5labs.eu/index.php?title=AI_REDGIO_5.0_Smart_Data_Enabler&amp;diff=568"/>
		<updated>2026-02-16T12:08:21Z</updated>

		<summary type="html">&lt;p&gt;Admino739mjm7: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;AI REDGIO 5.0 Smart Data Enabler&amp;lt;/strong&amp;gt;&lt;br /&gt;
&lt;br /&gt;
How a manufacturing organization with limited IT expertise can easily gain insights from its process data.&lt;br /&gt;
&lt;br /&gt;
== Asset Objectives ==&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
Smart Data Enabler (SDE) is a basic stack consisting of an AI Edge-to-Cloud minimum viable application built with the Open-Source tools recommended by AI REDGIO 5.0.&lt;br /&gt;
The goal of the SDE application is to demonstrate how an SME can follow a small number of simple steps to conduct an initial exploration and analysis of its (edge) production data.&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
== Asset Description ==&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
SDE is a technological demo showing how a very simple architecture, based on the open-source tools recommended by the AI REDGIO 5.0 project, allows SMEs to obtain useful analytical insights from their process data even in conditions of limited maturity and complexity, with reduced technological expertise and practically zero cost.&lt;br /&gt;
Once SDE has been installed locally, the company only needs to adapt the provided pipeline to its usage scenario, launch it, and start exploring its data in search of interesting behaviors.&lt;br /&gt;
&lt;br /&gt;
[[File:SmartDataEnabler02.png|center|x300px|Image Caption]]&lt;br /&gt;
&lt;br /&gt;
Production data can be a JSON file like this:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;operation&amp;quot;: &amp;quot;shift&amp;quot;,&lt;br /&gt;
  &amp;quot;spec&amp;quot;: {&lt;br /&gt;
    &amp;quot;deviceId&amp;quot;: &amp;quot;sensor-001&amp;quot;,&lt;br /&gt;
    &amp;quot;timestamp&amp;quot;: &amp;quot;2025-12-10 10:56:02&amp;quot;,&lt;br /&gt;
    &amp;quot;readings&amp;quot;: {&lt;br /&gt;
      &amp;quot;temperature&amp;quot;: &amp;quot;12.4&amp;quot;,&lt;br /&gt;
      &amp;quot;humidity&amp;quot;: &amp;quot;4.2&amp;quot;,&lt;br /&gt;
      &amp;quot;pressure&amp;quot;: &amp;quot;18.5&amp;quot;,&lt;br /&gt;
      &amp;quot;battery&amp;quot;: &amp;quot;-1.12&amp;quot;&lt;br /&gt;
    },&lt;br /&gt;
    &amp;quot;status&amp;quot;: &amp;quot;OK&amp;quot;&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
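For illustration only (a sketch, not part of the SDE distribution), a reading of this shape can be flattened into InfluxDB line protocol before storage; the measurement name and tag key below are hypothetical:&lt;br /&gt;

```python
import json

# Sample reading in the same shape as the JSON above.
raw = """
{
  "operation": "shift",
  "spec": {
    "deviceId": "sensor-001",
    "timestamp": "2025-12-10 10:56:02",
    "readings": {"temperature": "12.4", "humidity": "4.2",
                 "pressure": "18.5", "battery": "-1.12"},
    "status": "OK"
  }
}
"""

def to_line_protocol(payload, measurement="sensor_readings"):
    """Convert one reading into an InfluxDB line-protocol string."""
    spec = payload["spec"]
    # Each numeric reading becomes a field; strings are cast to float.
    fields = ",".join(
        f"{name}={float(value)}" for name, value in spec["readings"].items()
    )
    # The device id becomes a tag; timestamp handling is left to the server.
    return f'{measurement},deviceId={spec["deviceId"]} {fields}'

print(to_line_protocol(json.loads(raw)))
```

Each reading becomes one line-protocol point that InfluxDB can ingest directly; timestamp formatting is omitted here for brevity.&lt;br /&gt;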
&lt;br /&gt;
The application stack is composed of the following open-source technologies, whose (integrated) usage is recommended by the AI REDGIO 5.0 project:&lt;br /&gt;
* [https://nifi.apache.org/ Apache NiFi]: an easy-to-use, powerful, and reliable open-source system to process and distribute data, particularly suitable for integrating IoT data sources&lt;br /&gt;
* [https://www.min.io/ MinIO]: a high-performance, software-defined Object Storage server, in effect an open-source, private version of Amazon S3&lt;br /&gt;
* [https://www.influxdata.com InfluxDB]: a specialized open-source database designed to handle data indexed by time, fitting best when capturing streams of measurements coming from sensors&lt;br /&gt;
* [https://grafana.com/ Grafana]: an open-source visualization and analytics platform that lets you query, visualize, alert on, and understand your metrics no matter where they are stored&lt;br /&gt;
This is how the scenario translates into an architecture leveraging the technologies above:&lt;br /&gt;
&lt;br /&gt;
[[File:SmartDataEnabler022.png|center|x300px|Image Caption]]&lt;br /&gt;
&lt;br /&gt;
Nevertheless, the architecture is ready for various types of improvements thanks to AI.&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
== Use cases ==&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
The SDE application consists of a base stack and four ready-to-use industrial case studies on top:&lt;br /&gt;
# '''Electrical panel monitoring (small data)''' Four IIoT sensors, one for each phase (R, S, T) and Neutral, positioned inside a wide electrical panel of a critical high-voltage substation, a cooled environment in which it is essential to check the data in order to highlight anomalies. A key objective is to detect whether one of the terminals is loosening (causing an increase in resistance and therefore heat) before an electric arc is triggered.&lt;br /&gt;
# '''Electrical panel monitoring (large data)''' The same use case, except that the data is not small and entered directly into the pipeline, but rather large and read from an external file.&lt;br /&gt;
# '''Data Center environmental monitoring''' Monitoring of a data center's environmental measures expressed in the SenML standard. In this use case, we simulate a reading every 60 seconds from a control unit that monitors temperature and humidity.&lt;br /&gt;
# '''Robotic arm telemetry''' Monitoring a robotic arm's activity.&lt;br /&gt;
In an industrial robotic arm (e.g., an anthropomorphic robot on an assembly line), telemetry monitors not only the environment but also the mechanical and electrical status of the individual joints (axes). The critical variables usually concern the position, current consumption (which indicates stress), and temperature of the motors.&lt;br /&gt;
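As a hedged sketch of the kind of check these use cases call for (not code shipped with SDE), a rolling statistic over recent readings can flag the sudden temperature rise a loosening terminal would cause; the window size and threshold are illustrative:&lt;br /&gt;

```python
from statistics import mean, stdev

def flag_anomalies(values, window=8, k=3.0):
    """Flag readings deviating more than k standard deviations
    from the mean of the previous `window` readings."""
    flags = []
    for i, v in enumerate(values):
        history = values[max(0, i - window):i]
        if len(history) > 2:
            m, s = mean(history), stdev(history)
            # Guard against a zero stdev on perfectly flat history.
            flags.append(abs(v - m) > k * max(s, 1e-9))
        else:
            # Not enough history yet to judge this reading.
            flags.append(False)
    return flags

temps = [21.0, 21.2, 20.9, 21.1, 21.0, 21.2, 21.1, 27.5]  # sudden rise
print(flag_anomalies(temps))  # the final reading is flagged
```

The same logic can run as a step in the NiFi pipeline or as a Grafana alert rule, depending on where the threshold check fits best.&lt;br /&gt;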
&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
== Resources ==&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Code available on [https://github.com/AI-REDGIO-5-0/AI-REDGIO5.0-E2C-OS-Smart-Data-Enabler GitHub]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Created by [https://www.smc.it/ SMC]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Contact &#039;&#039;alessandro.cecconi@smc.it&#039;&#039;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Relevant Categories ==&lt;br /&gt;
[[Category:Software-as-a-Service]]&lt;/div&gt;</summary>
		<author><name>Admino739mjm7</name></author>
	</entry>
	<entry>
		<id>https://wiki.ai-redgio50.s5labs.eu/index.php?title=AI_REDGIO_5.0_Smart_Data_Enabler&amp;diff=567</id>
		<title>AI REDGIO 5.0 Smart Data Enabler</title>
		<link rel="alternate" type="text/html" href="https://wiki.ai-redgio50.s5labs.eu/index.php?title=AI_REDGIO_5.0_Smart_Data_Enabler&amp;diff=567"/>
		<updated>2026-02-16T12:07:48Z</updated>

		<summary type="html">&lt;p&gt;Admino739mjm7: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;AI REDGIO 5.0 Smart Data Enabler&amp;lt;/strong&amp;gt;&lt;br /&gt;
&lt;br /&gt;
How a manufacturing organization with limited IT expertise can easily gain insights from its process data.&lt;br /&gt;
&lt;br /&gt;
== Asset Objectives ==&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
Smart Data Enabler (SDE) is a basic stack consisting of an AI Edge-to-Cloud minimum viable application built with the Open-Source tools recommended by AI REDGIO 5.0.&lt;br /&gt;
The goal of the SDE application is to demonstrate how an SME can follow a small number of simple steps to conduct an initial exploration and analysis of its (edge) production data.&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
== Asset Description ==&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
SDE is a technological demo showing how a very simple architecture, based on the open-source tools recommended by the AI REDGIO 5.0 project, allows SMEs to obtain useful analytical insights from their process data even in conditions of limited maturity and complexity, with reduced technological expertise and practically zero cost.&lt;br /&gt;
Once SDE has been installed locally, the company only needs to adapt the provided pipeline to its usage scenario, launch it, and start exploring its data in search of interesting behaviors.&lt;br /&gt;
[[File:SmartDataEnabler01.png|center|x300px|Image Caption]]&lt;br /&gt;
Production data can be a JSON file like this:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;operation&amp;quot;: &amp;quot;shift&amp;quot;,&lt;br /&gt;
  &amp;quot;spec&amp;quot;: {&lt;br /&gt;
    &amp;quot;deviceId&amp;quot;: &amp;quot;sensor-001&amp;quot;,&lt;br /&gt;
    &amp;quot;timestamp&amp;quot;: &amp;quot;2025-12-10 10:56:02&amp;quot;,&lt;br /&gt;
    &amp;quot;readings&amp;quot;: {&lt;br /&gt;
      &amp;quot;temperature&amp;quot;: &amp;quot;12.4&amp;quot;,&lt;br /&gt;
      &amp;quot;humidity&amp;quot;: &amp;quot;4.2&amp;quot;,&lt;br /&gt;
      &amp;quot;pressure&amp;quot;: &amp;quot;18.5&amp;quot;,&lt;br /&gt;
      &amp;quot;battery&amp;quot;: &amp;quot;-1.12&amp;quot;&lt;br /&gt;
    },&lt;br /&gt;
    &amp;quot;status&amp;quot;: &amp;quot;OK&amp;quot;&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The application stack is composed of the following open-source technologies, whose (integrated) usage is recommended by the AI REDGIO 5.0 project:&lt;br /&gt;
* [https://nifi.apache.org/ Apache NiFi]: an easy-to-use, powerful, and reliable open-source system to process and distribute data, particularly suitable for integrating IoT data sources&lt;br /&gt;
* [https://www.min.io/ MinIO]: a high-performance, software-defined Object Storage server, in effect an open-source, private version of Amazon S3&lt;br /&gt;
* [https://www.influxdata.com InfluxDB]: a specialized open-source database designed to handle data indexed by time, fitting best when capturing streams of measurements coming from sensors&lt;br /&gt;
* [https://grafana.com/ Grafana]: an open-source visualization and analytics platform that lets you query, visualize, alert on, and understand your metrics no matter where they are stored&lt;br /&gt;
This is how the scenario translates into an architecture leveraging the technologies above:&lt;br /&gt;
&lt;br /&gt;
[[File:SmartDataEnabler022.png|center|x300px|Image Caption]]&lt;br /&gt;
&lt;br /&gt;
Nevertheless, the architecture is ready for various types of improvements thanks to AI.&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
== Use cases ==&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
The SDE application consists of a base stack and four ready-to-use industrial case studies on top:&lt;br /&gt;
# '''Electrical panel monitoring (small data)''' Four IIoT sensors, one for each phase (R, S, T) and Neutral, positioned inside a wide electrical panel of a critical high-voltage substation, a cooled environment in which it is essential to check the data in order to highlight anomalies. A key objective is to detect whether one of the terminals is loosening (causing an increase in resistance and therefore heat) before an electric arc is triggered.&lt;br /&gt;
# '''Electrical panel monitoring (large data)''' The same use case, except that the data is not small and entered directly into the pipeline, but rather large and read from an external file.&lt;br /&gt;
# '''Data Center environmental monitoring''' Monitoring of a data center's environmental measures expressed in the SenML standard. In this use case, we simulate a reading every 60 seconds from a control unit that monitors temperature and humidity.&lt;br /&gt;
# '''Robotic arm telemetry''' Monitoring a robotic arm's activity.&lt;br /&gt;
In an industrial robotic arm (e.g., an anthropomorphic robot on an assembly line), telemetry monitors not only the environment but also the mechanical and electrical status of the individual joints (axes). The critical variables usually concern the position, current consumption (which indicates stress), and temperature of the motors.&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
== Resources ==&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Code available on [https://github.com/AI-REDGIO-5-0/AI-REDGIO5.0-E2C-OS-Smart-Data-Enabler GitHub]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Created by [https://www.smc.it/ SMC]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Contact &#039;&#039;alessandro.cecconi@smc.it&#039;&#039;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Relevant Categories ==&lt;br /&gt;
[[Category:Software-as-a-Service]]&lt;/div&gt;</summary>
		<author><name>Admino739mjm7</name></author>
	</entry>
	<entry>
		<id>https://wiki.ai-redgio50.s5labs.eu/index.php?title=AI_REDGIO_5.0_Smart_Data_Enabler&amp;diff=566</id>
		<title>AI REDGIO 5.0 Smart Data Enabler</title>
		<link rel="alternate" type="text/html" href="https://wiki.ai-redgio50.s5labs.eu/index.php?title=AI_REDGIO_5.0_Smart_Data_Enabler&amp;diff=566"/>
		<updated>2026-02-16T12:06:14Z</updated>

		<summary type="html">&lt;p&gt;Admino739mjm7: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;AI REDGIO 5.0 Smart Data Enabler&amp;lt;/strong&amp;gt;&lt;br /&gt;
&lt;br /&gt;
How a manufacturing organization with limited IT expertise can easily gain insights from its process data.&lt;br /&gt;
&lt;br /&gt;
== Asset Objectives ==&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
Smart Data Enabler (SDE) is a basic stack consisting of an AI Edge-to-Cloud minimum viable application built with the Open-Source tools recommended by AI REDGIO 5.0.&lt;br /&gt;
The goal of the SDE application is to demonstrate how an SME can follow a small number of simple steps to conduct an initial exploration and analysis of its (edge) production data.&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
== Asset Description ==&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
SDE is a technological demo showing how a very simple architecture, based on the open-source tools recommended by the AI REDGIO 5.0 project, allows SMEs to obtain useful analytical insights from their process data even in conditions of limited maturity and complexity, with reduced technological expertise and practically zero cost.&lt;br /&gt;
Once SDE has been installed locally, the company only needs to adapt the provided pipeline to its usage scenario, launch it, and start exploring its data in search of interesting behaviors.&lt;br /&gt;
[[File:SmartDataEnabler01.png|center|x300px|Image Caption]]&lt;br /&gt;
Production data can be a JSON file like this:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;operation&amp;quot;: &amp;quot;shift&amp;quot;,&lt;br /&gt;
  &amp;quot;spec&amp;quot;: {&lt;br /&gt;
    &amp;quot;deviceId&amp;quot;: &amp;quot;sensor-001&amp;quot;,&lt;br /&gt;
    &amp;quot;timestamp&amp;quot;: &amp;quot;2025-12-10 10:56:02&amp;quot;,&lt;br /&gt;
    &amp;quot;readings&amp;quot;: {&lt;br /&gt;
      &amp;quot;temperature&amp;quot;: &amp;quot;12.4&amp;quot;,&lt;br /&gt;
      &amp;quot;humidity&amp;quot;: &amp;quot;4.2&amp;quot;,&lt;br /&gt;
      &amp;quot;pressure&amp;quot;: &amp;quot;18.5&amp;quot;,&lt;br /&gt;
      &amp;quot;battery&amp;quot;: &amp;quot;-1.12&amp;quot;&lt;br /&gt;
    },&lt;br /&gt;
    &amp;quot;status&amp;quot;: &amp;quot;OK&amp;quot;&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The application stack is composed of the following open-source technologies, whose (integrated) usage is recommended by the AI REDGIO 5.0 project:&lt;br /&gt;
* [https://nifi.apache.org/ Apache NiFi]: an easy-to-use, powerful, and reliable open-source system to process and distribute data, particularly suitable for integrating IoT data sources&lt;br /&gt;
* [https://www.min.io/ MinIO]: a high-performance, software-defined Object Storage server, in effect an open-source, private version of Amazon S3&lt;br /&gt;
* [https://www.influxdata.com InfluxDB]: a specialized open-source database designed to handle data indexed by time, fitting best when capturing streams of measurements coming from sensors&lt;br /&gt;
* [https://grafana.com/ Grafana]: an open-source visualization and analytics platform that lets you query, visualize, alert on, and understand your metrics no matter where they are stored&lt;br /&gt;
This is how the scenario translates into an architecture leveraging the technologies above:&lt;br /&gt;
[[File:SmartDataEnabler022.png|center|x300px|Image Caption]]&lt;br /&gt;
&lt;br /&gt;
Nevertheless, the architecture is ready for various types of improvements thanks to AI.&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
== Use cases ==&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
The SDE application consists of a base stack and four ready-to-use industrial case studies on top:&lt;br /&gt;
# '''Electrical panel monitoring (small data)''' Four IIoT sensors, one for each phase (R, S, T) and Neutral, positioned inside a wide electrical panel of a critical high-voltage substation, a cooled environment in which it is essential to check the data in order to highlight anomalies. A key objective is to detect whether one of the terminals is loosening (causing an increase in resistance and therefore heat) before an electric arc is triggered.&lt;br /&gt;
# '''Electrical panel monitoring (large data)''' The same use case, except that the data is not small and entered directly into the pipeline, but rather large and read from an external file.&lt;br /&gt;
# '''Data Center environmental monitoring''' Monitoring of a data center's environmental measures expressed in the SenML standard. In this use case, we simulate a reading every 60 seconds from a control unit that monitors temperature and humidity.&lt;br /&gt;
# '''Robotic arm telemetry''' Monitoring a robotic arm's activity.&lt;br /&gt;
In an industrial robotic arm (e.g., an anthropomorphic robot on an assembly line), telemetry monitors not only the environment but also the mechanical and electrical status of the individual joints (axes). The critical variables usually concern the position, current consumption (which indicates stress), and temperature of the motors.&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
== Resources ==&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Code available on [https://github.com/AI-REDGIO-5-0/AI-REDGIO5.0-E2C-OS-Smart-Data-Enabler GitHub]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Created by [https://www.smc.it/ SMC]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Contact &#039;&#039;alessandro.cecconi@smc.it&#039;&#039;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Relevant Categories ==&lt;br /&gt;
[[Category:Software-as-a-Service]]&lt;/div&gt;</summary>
		<author><name>Admino739mjm7</name></author>
	</entry>
	<entry>
		<id>https://wiki.ai-redgio50.s5labs.eu/index.php?title=AI_REDGIO_5.0_Smart_Data_Enabler&amp;diff=565</id>
		<title>AI REDGIO 5.0 Smart Data Enabler</title>
		<link rel="alternate" type="text/html" href="https://wiki.ai-redgio50.s5labs.eu/index.php?title=AI_REDGIO_5.0_Smart_Data_Enabler&amp;diff=565"/>
		<updated>2026-02-16T12:05:14Z</updated>

		<summary type="html">&lt;p&gt;Admino739mjm7: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;AI REDGIO 5.0 Smart Data Enabler&amp;lt;/strong&amp;gt;&lt;br /&gt;
&lt;br /&gt;
How a manufacturing organization with limited IT expertise can easily gain insights from its process data.&lt;br /&gt;
&lt;br /&gt;
== Asset Objectives ==&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
Smart Data Enabler (SDE) is a basic stack consisting of an AI Edge-to-Cloud minimum viable application built with the Open-Source tools recommended by AI REDGIO 5.0.&lt;br /&gt;
The goal of the SDE application is to demonstrate how an SME can follow a small number of simple steps to conduct an initial exploration and analysis of its (edge) production data.&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
== Asset Description ==&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
SDE is a technological demo showing how a very simple architecture, based on the open-source tools recommended by the AI REDGIO 5.0 project, allows SMEs to obtain useful analytical insights from their process data even in conditions of limited maturity and complexity, with reduced technological expertise and practically zero cost.&lt;br /&gt;
Once SDE has been installed locally, the company only needs to adapt the provided pipeline to its usage scenario, launch it, and start exploring its data in search of interesting behaviors.&lt;br /&gt;
[[File:SmartDataEnabler01.png|center|x300px|Image Caption]]&lt;br /&gt;
Production data can be a JSON file like this:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;operation&amp;quot;: &amp;quot;shift&amp;quot;,&lt;br /&gt;
  &amp;quot;spec&amp;quot;: {&lt;br /&gt;
    &amp;quot;deviceId&amp;quot;: &amp;quot;sensor-001&amp;quot;,&lt;br /&gt;
    &amp;quot;timestamp&amp;quot;: &amp;quot;2025-12-10 10:56:02&amp;quot;,&lt;br /&gt;
    &amp;quot;readings&amp;quot;: {&lt;br /&gt;
      &amp;quot;temperature&amp;quot;: &amp;quot;12.4&amp;quot;,&lt;br /&gt;
      &amp;quot;humidity&amp;quot;: &amp;quot;4.2&amp;quot;,&lt;br /&gt;
      &amp;quot;pressure&amp;quot;: &amp;quot;18.5&amp;quot;,&lt;br /&gt;
      &amp;quot;battery&amp;quot;: &amp;quot;-1.12&amp;quot;&lt;br /&gt;
    },&lt;br /&gt;
    &amp;quot;status&amp;quot;: &amp;quot;OK&amp;quot;&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The application stack is composed of the following open-source technologies, whose (integrated) usage is recommended by the AI REDGIO 5.0 project:&lt;br /&gt;
* [https://nifi.apache.org/ Apache NiFi]: an easy-to-use, powerful, and reliable open-source system to process and distribute data, particularly suitable for integrating IoT data sources&lt;br /&gt;
* [https://www.min.io/ MinIO]: a high-performance, software-defined Object Storage server, in effect an open-source, private version of Amazon S3&lt;br /&gt;
* [https://www.influxdata.com InfluxDB]: a specialized open-source database designed to handle data indexed by time, fitting best when capturing streams of measurements coming from sensors&lt;br /&gt;
* [https://grafana.com/ Grafana]: an open-source visualization and analytics platform that lets you query, visualize, alert on, and understand your metrics no matter where they are stored&lt;br /&gt;
This is how the scenario translates into an architecture leveraging the technologies above:&lt;br /&gt;
[[File:SmartDataEnabler022.png|center|x300px|Image Caption]]&lt;br /&gt;
&lt;br /&gt;
Nevertheless, the architecture is ready for various types of improvements thanks to AI.&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
== Use cases ==&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
The SDE application consists of a base stack and four ready-to-use industrial case studies on top:&lt;br /&gt;
# '''Electrical panel monitoring (small data)''' Four IIoT sensors, one for each phase (R, S, T) and Neutral, positioned inside a wide electrical panel of a critical high-voltage substation, a cooled environment in which it is essential to check the data in order to highlight anomalies. A key objective is to detect whether one of the terminals is loosening (causing an increase in resistance and therefore heat) before an electric arc is triggered.&lt;br /&gt;
# '''Electrical panel monitoring (large data)''' The same use case, except that the data is not small and entered directly into the pipeline, but rather large and read from an external file.&lt;br /&gt;
# '''Data Center environmental monitoring''' Monitoring of a data center's environmental measures expressed in the SenML standard. In this use case, we simulate a reading every 60 seconds from a control unit that monitors temperature and humidity.&lt;br /&gt;
# '''Robotic arm telemetry''' Monitoring a robotic arm's activity.&lt;br /&gt;
In an industrial robotic arm (e.g., an anthropomorphic robot on an assembly line), telemetry monitors not only the environment but also the mechanical and electrical status of the individual joints (axes). The critical variables usually concern the position, current consumption (which indicates stress), and temperature of the motors.&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
== Resources ==&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Code available on [https://github.com/AI-REDGIO-5-0/AI-REDGIO5.0-E2C-OS-Smart-Data-Enabler GitHub]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Created by [https://www.smc.it/ SMC]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Contact &#039;&#039;alessandro.cecconi@smc.it&#039;&#039;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Relevant Categories ==&lt;br /&gt;
[[Category:Software-as-a-Service]]&lt;/div&gt;</summary>
		<author><name>Admino739mjm7</name></author>
	</entry>
	<entry>
		<id>https://wiki.ai-redgio50.s5labs.eu/index.php?title=File:SmartDataEnabler022.png&amp;diff=564</id>
		<title>File:SmartDataEnabler022.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.ai-redgio50.s5labs.eu/index.php?title=File:SmartDataEnabler022.png&amp;diff=564"/>
		<updated>2026-02-16T12:04:46Z</updated>

		<summary type="html">&lt;p&gt;Admino739mjm7: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Admino739mjm7</name></author>
	</entry>
	<entry>
		<id>https://wiki.ai-redgio50.s5labs.eu/index.php?title=File:SmartDataEnabler02.png&amp;diff=563</id>
		<title>File:SmartDataEnabler02.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.ai-redgio50.s5labs.eu/index.php?title=File:SmartDataEnabler02.png&amp;diff=563"/>
		<updated>2026-02-16T12:03:37Z</updated>

		<summary type="html">&lt;p&gt;Admino739mjm7: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Admino739mjm7</name></author>
	</entry>
	<entry>
		<id>https://wiki.ai-redgio50.s5labs.eu/index.php?title=AI_REDGIO_5.0_Smart_Data_Enabler&amp;diff=562</id>
		<title>AI REDGIO 5.0 Smart Data Enabler</title>
		<link rel="alternate" type="text/html" href="https://wiki.ai-redgio50.s5labs.eu/index.php?title=AI_REDGIO_5.0_Smart_Data_Enabler&amp;diff=562"/>
		<updated>2026-02-16T12:03:06Z</updated>

		<summary type="html">&lt;p&gt;Admino739mjm7: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;AI REDGIO 5.0 Smart Data Enabler&amp;lt;/strong&amp;gt;&lt;br /&gt;
&lt;br /&gt;
How a manufacturing organization with limited IT expertise can easily gain insights from its process data.&lt;br /&gt;
&lt;br /&gt;
== Asset Objectives ==&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
Smart Data Enabler (SDE) is a basic stack consisting of an AI Edge-to-Cloud minimum viable application built with the Open-Source tools recommended by AI REDGIO 5.0.&lt;br /&gt;
The goal of the SDE application is to demonstrate how an SME can follow a small number of simple steps to conduct an initial exploration and analysis of its (edge) production data.&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
== Asset Description ==&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
SDE is a technological demo showing how a very simple architecture, based on the open-source tools recommended by the AI REDGIO 5.0 project, allows SMEs to obtain useful analytical insights from their process data even in conditions of limited maturity and complexity, with reduced technological expertise and practically zero cost.&lt;br /&gt;
Once SDE has been installed locally, the company only needs to adapt the provided pipeline to its usage scenario, launch it, and start exploring its data in search of interesting behaviors.&lt;br /&gt;
[[File:SmartDataEnabler01.png|center|x300px|Image Caption]]&lt;br /&gt;
Production data can be a JSON file like this:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;operation&amp;quot;: &amp;quot;shift&amp;quot;,&lt;br /&gt;
  &amp;quot;spec&amp;quot;: {&lt;br /&gt;
    &amp;quot;deviceId&amp;quot;: &amp;quot;sensor-001&amp;quot;,&lt;br /&gt;
    &amp;quot;timestamp&amp;quot;: &amp;quot;2025-12-10 10:56:02&amp;quot;,&lt;br /&gt;
    &amp;quot;readings&amp;quot;: {&lt;br /&gt;
      &amp;quot;temperature&amp;quot;: &amp;quot;12.4&amp;quot;,&lt;br /&gt;
      &amp;quot;humidity&amp;quot;: &amp;quot;4.2&amp;quot;,&lt;br /&gt;
      &amp;quot;pressure&amp;quot;: &amp;quot;18.5&amp;quot;,&lt;br /&gt;
      &amp;quot;battery&amp;quot;: &amp;quot;-1.12&amp;quot;&lt;br /&gt;
    },&lt;br /&gt;
    &amp;quot;status&amp;quot;: &amp;quot;OK&amp;quot;&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The application stack is composed of the following open-source technologies, whose (integrated) usage is recommended by the AI REDGIO 5.0 project:&lt;br /&gt;
* [https://nifi.apache.org/ Apache NiFi]: an easy-to-use, powerful, and reliable open-source system to process and distribute data, particularly suitable for integrating IoT data sources&lt;br /&gt;
* [https://www.min.io/ MinIO]: a high-performance, software-defined object storage server, a sort of open-source, private version of Amazon S3&lt;br /&gt;
* [https://www.influxdata.com InfluxDB]: a specialized open-source database designed to handle data indexed by time, best suited to capturing streams of measurements coming from sensors&lt;br /&gt;
* [https://grafana.com/ Grafana]: an open-source visualization and analytics platform that lets you query, visualize, alert on, and understand your metrics no matter where they are stored&lt;br /&gt;
This is how the scenario translates into an architecture leveraging these technologies:&lt;br /&gt;
[[File:SmartDataEnabler02.png|center|x300px|Image Caption]]&lt;br /&gt;
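To make the data flow above concrete, a reading like the JSON sample shown earlier can be flattened into InfluxDB's line protocol, the plain-text write format InfluxDB ingests. The measurement name `readings` and the helper below are hypothetical: this is a minimal sketch of the transformation, not the actual NiFi flow shipped with SDE.

```python
import json

# Hypothetical helper: convert one SDE-style sensor payload (like the JSON
# sample above) into a single InfluxDB line-protocol record of the form
#   measurement,tag_set field_set
# The timestamp is omitted here, so InfluxDB would use the arrival time.
def to_line_protocol(payload):
    spec = payload["spec"]
    device = spec["deviceId"]
    fields = ",".join(
        f"{name}={float(value)}" for name, value in spec["readings"].items()
    )
    return f"readings,deviceId={device} {fields}"

sample = json.loads("""{
  "operation": "shift",
  "spec": {
    "deviceId": "sensor-001",
    "timestamp": "2025-12-10 10:56:02",
    "readings": {"temperature": "12.4", "humidity": "4.2"},
    "status": "OK"
  }
}""")

print(to_line_protocol(sample))
```

In the demo this mapping is performed inside the pipeline; the sketch only illustrates why a time-series store such as InfluxDB is a natural sink for this kind of payload.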
&lt;br /&gt;
Moreover, the architecture is ready for various types of AI-driven improvements.&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
== Use cases ==&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
The SDE application consists of a base stack and four ready-to-use industrial case studies on top:&lt;br /&gt;
# &#039;&#039;&#039;Electrical panel monitoring (small data)&#039;&#039;&#039; Four IIoT sensors – one for each phase (R, S, T) and neutral – positioned inside a wide electrical panel in a critical high-voltage substation, a cooled environment where it is essential to check the data in order to highlight anomalies. A key objective is to detect whether one of the terminals is loosening (causing an increase in resistance and therefore heat) before an electric arc is triggered.&lt;br /&gt;
# &#039;&#039;&#039;Electrical panel monitoring (large data)&#039;&#039;&#039; The same use case, with the difference that the data is not small and entered directly into the pipeline, but rather large and read from an external file.&lt;br /&gt;
# &#039;&#039;&#039;Data Center environmental monitoring&#039;&#039;&#039; Monitoring of a data center&#039;s environmental measures, expressed in the SenML standard. In this use case, we simulate a reading every 60 seconds from a control unit that monitors temperature and humidity.&lt;br /&gt;
# &#039;&#039;&#039;Robotic arm telemetry&#039;&#039;&#039; Monitoring a robotic arm&#039;s activity.&lt;br /&gt;
In an industrial robotic arm (e.g., an anthropomorphic robot on an assembly line), telemetry monitors not only the environment but also the mechanical and electrical status of the individual joints (axes). Critical variables usually concern the position, current consumption (which indicates stress), and temperature of the motors.&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
== Resources ==&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Code available on [https://github.com/AI-REDGIO-5-0/AI-REDGIO5.0-E2C-OS-Smart-Data-Enabler GitHub]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Created by [https://www.smc.it/ SMC]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Contact &#039;&#039;alessandro.cecconi@smc.it&#039;&#039;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Relevant Categories==&lt;br /&gt;
[[Category:Software-as-a-Service]]&lt;/div&gt;</summary>
		<author><name>Admino739mjm7</name></author>
	</entry>
	<entry>
		<id>https://wiki.ai-redgio50.s5labs.eu/index.php?title=AI_REDGIO_5.0_Smart_Data_Enabler&amp;diff=561</id>
		<title>AI REDGIO 5.0 Smart Data Enabler</title>
		<link rel="alternate" type="text/html" href="https://wiki.ai-redgio50.s5labs.eu/index.php?title=AI_REDGIO_5.0_Smart_Data_Enabler&amp;diff=561"/>
		<updated>2026-02-16T11:55:48Z</updated>

		<summary type="html">&lt;p&gt;Admino739mjm7: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;AI REDGIO 5.0 Smart Data Enabler&amp;lt;/strong&amp;gt;&lt;br /&gt;
&lt;br /&gt;
How an organization working in manufacturing, with limited IT expertise, can easily gain insights from its process data.&lt;br /&gt;
&lt;br /&gt;
== Asset Objectives ==&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
Smart Data Enabler (SDE) is a basic stack consisting of an AI Edge-to-Cloud minimum viable application built with the Open-Source tools recommended by AI REDGIO 5.0.&lt;br /&gt;
The goal of the SDE application is to demonstrate how an SME can follow a few simple steps to conduct an initial exploration and analysis of its (edge) production data.&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Asset Description ==&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
SDE is a technological demo showing how a very simple architecture, based on the open-source tools recommended in the AI REDGIO 5.0 project, allows SMEs to obtain useful analytical insights from their process data even in conditions of limited maturity and complexity, with limited technological expertise and practically zero cost/investment.&lt;br /&gt;
Once SDE has been installed locally, the company only needs to adapt the provided pipeline to its usage scenario, launch it, and start exploring its data in search of interesting behaviors.&lt;br /&gt;
&lt;br /&gt;
Production data can be a JSON file like this:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;operation&amp;quot;: &amp;quot;shift&amp;quot;,&lt;br /&gt;
  &amp;quot;spec&amp;quot;: {&lt;br /&gt;
    &amp;quot;deviceId&amp;quot;: &amp;quot;sensor-001&amp;quot;,&lt;br /&gt;
    &amp;quot;timestamp&amp;quot;: &amp;quot;2025-12-10 10:56:02&amp;quot;,&lt;br /&gt;
    &amp;quot;readings&amp;quot;: {&lt;br /&gt;
      &amp;quot;temperature&amp;quot;: &amp;quot;12.4&amp;quot;,&lt;br /&gt;
      &amp;quot;humidity&amp;quot;: &amp;quot;4.2&amp;quot;,&lt;br /&gt;
      &amp;quot;pressure&amp;quot;: &amp;quot;18.5&amp;quot;,&lt;br /&gt;
      &amp;quot;battery&amp;quot;: &amp;quot;-1.12&amp;quot;&lt;br /&gt;
    },&lt;br /&gt;
    &amp;quot;status&amp;quot;: &amp;quot;OK&amp;quot;&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The application stack is composed of the following open-source technologies, whose (integrated) usage is recommended by the AI REDGIO 5.0 project:&lt;br /&gt;
* [https://nifi.apache.org/ Apache NiFi]: an easy-to-use, powerful, and reliable open-source system to process and distribute data, particularly suitable for integrating IoT data sources&lt;br /&gt;
* [https://www.min.io/ MinIO]: a high-performance, software-defined object storage server, a sort of open-source, private version of Amazon S3&lt;br /&gt;
* [https://www.influxdata.com InfluxDB]: a specialized open-source database designed to handle data indexed by time, best suited to capturing streams of measurements coming from sensors&lt;br /&gt;
* [https://grafana.com/ Grafana]: an open-source visualization and analytics platform that lets you query, visualize, alert on, and understand your metrics no matter where they are stored&lt;br /&gt;
This is how the scenario translates into an architecture leveraging these technologies:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Moreover, the architecture is ready for various types of AI-driven improvements.&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
== Use cases ==&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
The SDE application consists of a base stack and four ready-to-use industrial case studies on top:&lt;br /&gt;
# &#039;&#039;&#039;Electrical panel monitoring (small data)&#039;&#039;&#039; Four IIoT sensors – one for each phase (R, S, T) and neutral – positioned inside a wide electrical panel in a critical high-voltage substation, a cooled environment where it is essential to check the data in order to highlight anomalies. A key objective is to detect whether one of the terminals is loosening (causing an increase in resistance and therefore heat) before an electric arc is triggered.&lt;br /&gt;
# &#039;&#039;&#039;Electrical panel monitoring (large data)&#039;&#039;&#039; The same use case, with the difference that the data is not small and entered directly into the pipeline, but rather large and read from an external file.&lt;br /&gt;
# &#039;&#039;&#039;Data Center environmental monitoring&#039;&#039;&#039; Monitoring of a data center&#039;s environmental measures, expressed in the SenML standard. In this use case, we simulate a reading every 60 seconds from a control unit that monitors temperature and humidity.&lt;br /&gt;
# &#039;&#039;&#039;Robotic arm telemetry&#039;&#039;&#039; Monitoring a robotic arm&#039;s activity.&lt;br /&gt;
In an industrial robotic arm (e.g., an anthropomorphic robot on an assembly line), telemetry monitors not only the environment but also the mechanical and electrical status of the individual joints (axes). Critical variables usually concern the position, current consumption (which indicates stress), and temperature of the motors.&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Resources ==&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Code available on [https://github.com/AI-REDGIO-5-0/AI-REDGIO5.0-E2C-OS-Smart-Data-Enabler GitHub]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Created by [https://www.smc.it/ SMC]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Contact &#039;&#039;alessandro.cecconi@smc.it&#039;&#039;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Relevant Categories==&lt;br /&gt;
[[Category:Software-as-a-Service]]&lt;/div&gt;</summary>
		<author><name>Admino739mjm7</name></author>
	</entry>
	<entry>
		<id>https://wiki.ai-redgio50.s5labs.eu/index.php?title=AI_REDGIO_5.0_Smart_Data_Enabler&amp;diff=560</id>
		<title>AI REDGIO 5.0 Smart Data Enabler</title>
		<link rel="alternate" type="text/html" href="https://wiki.ai-redgio50.s5labs.eu/index.php?title=AI_REDGIO_5.0_Smart_Data_Enabler&amp;diff=560"/>
		<updated>2026-02-16T11:50:28Z</updated>

		<summary type="html">&lt;p&gt;Admino739mjm7: Created page with &amp;quot;&amp;lt;strong&amp;gt;AI REDGIO 5.0 Smart Data Enabler&amp;lt;/Strong&amp;gt;  How an organization relying in Manufacturing, with limited IT expertise, can easily gain insights from its process data.  == Asset Objectives == &amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt; Smart Data Enabler (SDE) is a basic stack consisting in an AI Edge-to-Cloud minimum viable application using AI REDGIO 5.0 recommended Open-Source tools. SDE application goal is to demonstrate how a SME can easily run a short number of simple steps...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;AI REDGIO 5.0 Smart Data Enabler&amp;lt;/strong&amp;gt;&lt;br /&gt;
&lt;br /&gt;
How an organization working in manufacturing, with limited IT expertise, can easily gain insights from its process data.&lt;br /&gt;
&lt;br /&gt;
== Asset Objectives ==&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
Smart Data Enabler (SDE) is a basic stack consisting of an AI Edge-to-Cloud minimum viable application built with the Open-Source tools recommended by AI REDGIO 5.0.&lt;br /&gt;
The goal of the SDE application is to demonstrate how an SME can follow a few simple steps to conduct an initial exploration and analysis of its (edge) production data.&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Asset Description ==&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
SDE is a technological demo showing how a very simple architecture, based on the open-source tools recommended in the AI REDGIO 5.0 project, allows SMEs to obtain useful analytical insights from their process data even in conditions of limited maturity and complexity, with limited technological expertise and practically zero cost/investment.&lt;br /&gt;
Once SDE has been installed locally, the company only needs to adapt the provided pipeline to its usage scenario, launch it, and start exploring its data in search of interesting behaviors.&lt;br /&gt;
&lt;br /&gt;
Production data can be a JSON file like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;json&amp;quot;&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;operation&amp;quot;: &amp;quot;shift&amp;quot;,&lt;br /&gt;
  &amp;quot;spec&amp;quot;: {&lt;br /&gt;
    &amp;quot;deviceId&amp;quot;: &amp;quot;sensor-001&amp;quot;,&lt;br /&gt;
    &amp;quot;timestamp&amp;quot;: &amp;quot;2025-12-10 10:56:02&amp;quot;,&lt;br /&gt;
    &amp;quot;readings&amp;quot;: {&lt;br /&gt;
      &amp;quot;temperature&amp;quot;: &amp;quot;12.4&amp;quot;,&lt;br /&gt;
      &amp;quot;humidity&amp;quot;: &amp;quot;4.2&amp;quot;,&lt;br /&gt;
      &amp;quot;pressure&amp;quot;: &amp;quot;18.5&amp;quot;,&lt;br /&gt;
      &amp;quot;battery&amp;quot;: &amp;quot;-1.12&amp;quot;&lt;br /&gt;
    },&lt;br /&gt;
    &amp;quot;status&amp;quot;: &amp;quot;OK&amp;quot;&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The application stack is composed of the following open-source technologies, whose (integrated) usage is recommended by the AI REDGIO 5.0 project:&lt;br /&gt;
* Apache NiFi: an easy-to-use, powerful, and reliable open-source system to process and distribute data, particularly suitable for integrating IoT data sources&lt;br /&gt;
* MinIO: a high-performance, software-defined object storage server, a sort of open-source, private version of Amazon S3&lt;br /&gt;
* InfluxDB: a specialized open-source database designed to handle data indexed by time, best suited to capturing streams of measurements coming from sensors&lt;br /&gt;
* Grafana: an open-source visualization and analytics platform that lets you query, visualize, alert on, and understand your metrics no matter where they are stored&lt;br /&gt;
This is how the scenario translates into an architecture leveraging these technologies:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Moreover, the architecture is ready for various types of AI-driven improvements.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Use cases ==&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
The SDE application consists of a base stack and four ready-to-use industrial case studies on top:&lt;br /&gt;
1.	Electrical panel monitoring (small data)&lt;br /&gt;
Four IIoT sensors – one for each phase (R, S, T) and neutral – positioned inside a wide electrical panel in a critical high-voltage substation, a cooled environment where it is essential to check the data in order to highlight anomalies.&lt;br /&gt;
A key objective is to detect whether one of the terminals is loosening (causing an increase in resistance and therefore heat) before an electric arc is triggered.&lt;br /&gt;
2.	Electrical panel monitoring (large data)&lt;br /&gt;
The same use case, with the difference that the data is not small and entered directly into the pipeline, but rather large and read from an external file.&lt;br /&gt;
3.	Data Center environmental monitoring&lt;br /&gt;
Monitoring of a data center&#039;s environmental measures, expressed in the SenML standard.&lt;br /&gt;
In this use case, we simulate a reading every 60 seconds from a control unit that monitors temperature and humidity.&lt;br /&gt;
4.	Robotic arm telemetry&lt;br /&gt;
Monitoring a robotic arm&#039;s activity.&lt;br /&gt;
In an industrial robotic arm (e.g., an anthropomorphic robot on an assembly line), telemetry monitors not only the environment but also the mechanical and electrical status of the individual joints (axes).&lt;br /&gt;
Critical variables usually concern the position, current consumption (which indicates stress), and temperature of the motors.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Resources ==&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Code available on [https://github.com/AI-REDGIO-5-0/AI-REDGIO5.0-E2C-OS-Smart-Data-Enabler GitHub]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Created by [https://www.smc.it/ SMC]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Contact &#039;&#039;alessandro.cecconi@smc.it&#039;&#039;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Relevant Categories==&lt;br /&gt;
[[Category:Software-as-a-Service]]&lt;/div&gt;</summary>
		<author><name>Admino739mjm7</name></author>
	</entry>
	<entry>
		<id>https://wiki.ai-redgio50.s5labs.eu/index.php?title=Self-signed_Certificates_Generation_Tool&amp;diff=559</id>
		<title>Self-signed Certificates Generation Tool</title>
		<link rel="alternate" type="text/html" href="https://wiki.ai-redgio50.s5labs.eu/index.php?title=Self-signed_Certificates_Generation_Tool&amp;diff=559"/>
		<updated>2025-11-03T12:41:42Z</updated>

		<summary type="html">&lt;p&gt;Admino739mjm7: /* Dataset Information */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Generation of self-signed certificates compatible with Arrowhead. &amp;lt;/strong&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Asset Description ==&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
This tool supports the generation of self-signed certificates (.p12 format) suitable for use with the [https://arrowhead.eu/ Arrowhead] platform.&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Asset Details ==&lt;br /&gt;
=== Tool Information ===&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
The tool is developed in Python and can generate all the certificates needed by Arrowhead clients according to predefined specifications. The specifications need to be defined manually in configuration files. The Python script is then executed, creating the intermediate (cloud) certificates, the end-entity (client) certificates, and the truststore.&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
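For readers unfamiliar with the certificate workflow the tool automates, the manual equivalent with plain OpenSSL looks roughly like the following. The file names and subject are hypothetical (the actual tool reads its specifications from configuration files and also builds the full intermediate/truststore chain), so treat this as an illustrative sketch only.

```shell
# 1. Create a self-signed certificate plus private key (PEM).
openssl req -x509 -newkey rsa:2048 -days 365 -nodes \
  -keyout client.key.pem -out client.cert.pem \
  -subj "/CN=demo-client"

# 2. Bundle key and certificate into the .p12 keystore that
#    Arrowhead clients consume.
openssl pkcs12 -export -inkey client.key.pem -in client.cert.pem \
  -out demo-client.p12 -passout pass:changeit
```

The tool's value over these raw commands is that it derives all subjects, chains, and stores from the configuration files in one run.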
&lt;br /&gt;
== Usage ==&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
For walk-through instructions, see the documentation of the [https://github.com/CuAuPro/cryptogen-python repository].&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Maturity===&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
Available. &amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Licence===&lt;br /&gt;
Open source, MIT license&lt;br /&gt;
&lt;br /&gt;
== Resources ==&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Implementation available [https://github.com/CuAuPro/cryptogen-python here]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Provided by [https://www.ijs.si/ijsw/V001/JSI Jožef Stefan Institute (JSI)]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Acknowledgement====&lt;br /&gt;
&amp;lt;p style=&amp;quot;font-size:90%;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;This work was funded by AI REDGIO 5.0 (101092069). The dataset will be used in the AI REDGIO 5.0 Didactic Factory Pilot DFIII: Self-evolving monitoring systems for assembly production lines.&#039;&#039;&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Relevant Categories==&lt;br /&gt;
[[Category:Library]][[Category:Executable]][[Category:Edge]][[Category:Cloud-based]][[Category:Security]][[Category:Python]]&lt;/div&gt;</summary>
		<author><name>Admino739mjm7</name></author>
	</entry>
	<entry>
		<id>https://wiki.ai-redgio50.s5labs.eu/index.php?title=Tabular-data_Rehearsal-based_Incremental_Lifelong_Learning_Framework_(TRIL3)&amp;diff=558</id>
		<title>Tabular-data Rehearsal-based Incremental Lifelong Learning Framework (TRIL3)</title>
		<link rel="alternate" type="text/html" href="https://wiki.ai-redgio50.s5labs.eu/index.php?title=Tabular-data_Rehearsal-based_Incremental_Lifelong_Learning_Framework_(TRIL3)&amp;diff=558"/>
		<updated>2025-11-03T12:40:09Z</updated>

		<summary type="html">&lt;p&gt;Admino739mjm7: /* Dataset Information */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Continual Learning paradigm avoiding Catastrophic Forgetting for generic tabular-data problems. &amp;lt;/strong&amp;gt; [[File:Tril3 architecture.png|thumb|right|&amp;lt;div style=&amp;quot;font-size:88%;line-height: 1.5em&amp;quot;&amp;gt;Image 1: TRIL3 architecture and data flow.&amp;lt;/div&amp;gt;]]&lt;br /&gt;
&lt;br /&gt;
== Asset Description ==&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
A new methodology, coined the Tabular-data Rehearsal-based Incremental Lifelong Learning framework (TRIL3), designed to address the phenomenon of catastrophic forgetting in tabular-data classification problems.&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Asset Details ==&lt;br /&gt;
=== Framework Information ===&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
Continual learning (CL) poses the important challenge of adapting to evolving data distributions without forgetting previously acquired knowledge while consolidating new knowledge. In this paper, we introduce a new methodology, coined the Tabular-data Rehearsal-based Incremental Lifelong Learning framework (TRIL3), designed to address the phenomenon of catastrophic forgetting in tabular-data classification problems. TRIL3 uses the prototype-based incremental generative model XuILVQ to generate synthetic data that preserves old knowledge, and the DNDF algorithm, modified to run incrementally, to learn classification tasks for tabular data without storing old samples. After different tests to determine the adequate percentage of synthetic data and to compare TRIL3 with other available CL proposals, we can conclude that TRIL3 outperforms the other options in the literature while using only 50% synthetic data.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Usage ==&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
The publication is available at this [https://doi.org/10.48550/arXiv.2407.09039 link]. Please use the following reference:&lt;br /&gt;
&#039;&#039;García-Santaclara, Pablo &amp;amp; Fernández-Castro, Bruno &amp;amp; Redondo, Rebeca. (2024). Overcoming Catastrophic Forgetting in Tabular Data Classification: A Pseudorehearsal-based approach. 10.48550/arXiv.2407.09039&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Maturity===&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
Not available &amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Licence===&lt;br /&gt;
Not available&lt;br /&gt;
&lt;br /&gt;
== Resources ==&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;The publication is available [https://doi.org/10.48550/arXiv.2407.09039 here]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;For further information, please contact mmarquez@gradiant.org.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Provided by [https://gradiant.org/en/ GRADIANT]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Acknowledgement====&lt;br /&gt;
&amp;lt;p style=&amp;quot;font-size:90%;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;Part of the associated work to this paper was carried out by Gradiant under the AI REDGIO 5.0 project initiative.&#039;&#039;&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Relevant Categories==&lt;br /&gt;
[[Category:Theoretical Foundations]][[Category:Edge AI]][[Category:Machine Learning]]&lt;/div&gt;</summary>
		<author><name>Admino739mjm7</name></author>
	</entry>
	<entry>
		<id>https://wiki.ai-redgio50.s5labs.eu/index.php?title=Acceleration_Datasets&amp;diff=557</id>
		<title>Acceleration Datasets</title>
		<link rel="alternate" type="text/html" href="https://wiki.ai-redgio50.s5labs.eu/index.php?title=Acceleration_Datasets&amp;diff=557"/>
		<updated>2025-10-31T11:19:44Z</updated>

		<summary type="html">&lt;p&gt;Admino739mjm7: /* Maturity */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Acceleration datasets for anomaly detection and predictive diagnosis in industrial automation.&amp;lt;/strong&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Asset Description ==&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
The asset consists of datasets containing accelerations (along three axes) measured by means of a sensor board while the AI-REDGIO 5.0 E2mech experiment was running.&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Asset Details ==&lt;br /&gt;
=== Dataset Information ===&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
Each dataset is provided as a CSV file, where each column contains the acceleration measured over a time horizon equal to 4 times the mechanism&#039;s rotation period. Each dataset corresponds to a given angular speed and to a healthy or faulty condition. Healthy and faulty data are included in two separate datasets, corresponding to the different control algorithms adopted to steer the experiment mechanism. Conditions corresponding to the mechanism running at different angular speeds are provided (each with its own healthy and faulty scenario).&lt;br /&gt;
The axis along which the measurement was taken, the system’s angular speed, and the healthy or faulty condition are all encoded in the file name.&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Usage ==&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
We provide a set of files, each concerning a specific system condition (in terms of speed, measurement, and health), so that interested users can group them as needed for analysis (e.g., by selecting only a given acceleration axis or a given speed) without having to download one huge file (e.g., in JSON format) that would then need to be parsed and split according to the desired analysis.&lt;br /&gt;
In this respect, given the description of the data above, it should be straightforward to import the files into one’s preferred data analytics tool (Python, MATLAB, R, etc.) and test condition monitoring and anomaly detection algorithms.&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
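Since the condition metadata lives in the file names, selecting a subset of files reduces to parsing those names. The naming pattern used below ("axisX_speed100_healthy.csv") is hypothetical and should be adapted to the actual files in the repository; this is only a minimal sketch of the approach.

```python
from pathlib import Path

# Hypothetical parser: recover axis, angular speed, and health condition
# from a file name shaped like "axisX_speed100_healthy.csv".
def parse_name(path):
    axis, speed, health = Path(path).stem.split("_")
    return {
        "axis": axis.removeprefix("axis"),
        "speed_rpm": int(speed.removeprefix("speed")),
        "healthy": health == "healthy",
    }

meta = parse_name("data/axisX_speed100_healthy.csv")
```

With such a parser, grouping the files (e.g., all healthy runs along one axis) is a one-line filter over a directory listing.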
&lt;br /&gt;
=== Maturity===&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
The datasets concerning the 2nd iteration of the E2Mech experiment in the AI-REDGIO 5.0 project have been completed in September 2025.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Licence===&lt;br /&gt;
Open source&lt;br /&gt;
&lt;br /&gt;
== Resources ==&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Datasets available in [https://github.com/AI-REDGIO-5-0/E2Mech_DataSet GitHub]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Provided by [https://dei.unibo.it/en/research/research-groups/actema ACTEMA]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Acknowledgement====&lt;br /&gt;
&amp;lt;p style=&amp;quot;font-size:90%;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;This work was funded by AI REDGIO 5.0 (101092069). The dataset has been created in the context of the AI REDGIO 5.0 E2Mech Experiment.&#039;&#039;&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Relevant Categories==&lt;br /&gt;
[[Category:Dataset]][[Category:Predictive Maintenance]][[Category:Anomaly Detection]]&lt;/div&gt;</summary>
		<author><name>Admino739mjm7</name></author>
	</entry>
	<entry>
		<id>https://wiki.ai-redgio50.s5labs.eu/index.php?title=Acceleration_Datasets&amp;diff=556</id>
		<title>Acceleration Datasets</title>
		<link rel="alternate" type="text/html" href="https://wiki.ai-redgio50.s5labs.eu/index.php?title=Acceleration_Datasets&amp;diff=556"/>
		<updated>2025-10-31T11:18:24Z</updated>

		<summary type="html">&lt;p&gt;Admino739mjm7: /* Resources */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Acceleration datasets for anomaly detection and predictive diagnosis in industrial automation.&amp;lt;/strong&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Asset Description ==&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
The asset consists of datasets containing accelerations (along three axes) measured by means of a sensor board while the AI-REDGIO 5.0 E2mech experiment was running.&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Asset Details ==&lt;br /&gt;
=== Dataset Information ===&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
Each dataset is provided as a CSV file, where each column contains the acceleration measured over a time horizon equal to 4 times the mechanism&#039;s rotation period. Each dataset corresponds to a given angular speed and to a healthy or faulty condition. Healthy and faulty data are included in two separate datasets, corresponding to the different control algorithms adopted to steer the experiment mechanism. Conditions corresponding to the mechanism running at different angular speeds are provided (each with its own healthy and faulty scenario).&lt;br /&gt;
The axis along which the measurement was taken, the system’s angular speed, and the healthy or faulty condition are all encoded in the file name.&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Usage ==&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
We provide a set of files, each concerning a specific system condition (in terms of speed, measurement, and health), so that interested users can group them as needed for analysis (e.g., by selecting only a given acceleration axis or a given speed) without having to download one huge file (e.g., in JSON format) that would then need to be parsed and split according to the desired analysis.&lt;br /&gt;
In this respect, given the description of the data above, it should be straightforward to import the files into one’s preferred data analytics tool (Python, MATLAB, R, etc.) and test condition monitoring and anomaly detection algorithms.&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Maturity===&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
The datasets concerning the 2nd iteration of the E2Mech experiment in the AI-REDGIO 5.0 project were completed in September.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Licence===&lt;br /&gt;
Open source&lt;br /&gt;
&lt;br /&gt;
== Resources ==&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Datasets available in [https://github.com/AI-REDGIO-5-0/E2Mech_DataSet GitHub]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Provided by [https://dei.unibo.it/en/research/research-groups/actema ACTEMA]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Acknowledgement====&lt;br /&gt;
&amp;lt;p style=&amp;quot;font-size:90%;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;This work was funded by AI REDGIO 5.0 (101092069). The dataset has been created in the context of the AI REDGIO 5.0 E2Mech Experiment.&#039;&#039;&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Relevant Categories==&lt;br /&gt;
[[Category:Dataset]][[Category:Predictive Maintenance]][[Category:Anomaly Detection]]&lt;/div&gt;</summary>
		<author><name>Admino739mjm7</name></author>
	</entry>
	<entry>
		<id>https://wiki.ai-redgio50.s5labs.eu/index.php?title=AI_Reference_Implementations&amp;diff=555</id>
		<title>AI Reference Implementations</title>
		<link rel="alternate" type="text/html" href="https://wiki.ai-redgio50.s5labs.eu/index.php?title=AI_Reference_Implementations&amp;diff=555"/>
		<updated>2025-10-20T12:46:23Z</updated>

		<summary type="html">&lt;p&gt;Admino739mjm7: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;A compilation of tools and resources that can help you kickstart your Industry 5.0 journey.&lt;br /&gt;
&lt;br /&gt;
== Quality Control in Industry with CV and TinyML &#039;&#039;by ExpertAI&#039;&#039; ==&lt;br /&gt;
* [https://wiki.ai-redgio50.s5labs.eu/index.php?title=Quality_Control_in_Industry_with_CV_and_TinyML Quality Control in Industry with CV and TinyML]&lt;br /&gt;
&lt;br /&gt;
== FPGA Device Plugin &#039;&#039;by GRADIANT&#039;&#039; ==&lt;br /&gt;
* [https://wiki.ai-redgio50.s5labs.eu/index.php?title=FPGA_Device_Plugin FPGA Device Plugin for Kubernetes]&lt;br /&gt;
&lt;br /&gt;
== XuILVQ: A River Implementation of the Incremental Learning Vector Quantization for IoT &#039;&#039;by GRADIANT&#039;&#039; ==&lt;br /&gt;
* [https://wiki.ai-redgio50.s5labs.eu/index.php?title=XuILVQ:_A_River_Implementation_of_the_Incremental_Learning_Vector_Quantization_for_IoT XuILVQ: A River Implementation of the Incremental Learning Vector Quantization for IoT]&lt;br /&gt;
&lt;br /&gt;
== Data Analysis Dashboard &#039;&#039;by SCCH&#039;&#039; ==&lt;br /&gt;
* [https://wiki.ai-redgio50.s5labs.eu/index.php?title=Data_Analysis_Dashboard Data Analysis Dashboard]&lt;br /&gt;
&lt;br /&gt;
== Energy Usage Dataset and Model &#039;&#039;by PBN&#039;&#039; ==&lt;br /&gt;
* [https://wiki.ai-redgio50.s5labs.eu/index.php?title=Energy_Usage_Dataset Energy Usage Dataset and Model]&lt;br /&gt;
&lt;br /&gt;
== Fabric Defects Dataset &#039;&#039;by Katty Fashion&#039;&#039; ==&lt;br /&gt;
* [https://wiki.ai-redgio50.s5labs.eu/index.php?title=Fabrics_Defect_Dataset Fabric Defects Dataset]&lt;br /&gt;
&lt;br /&gt;
== Fashion Product Defects Dataset &#039;&#039;by Katty Fashion&#039;&#039; ==&lt;br /&gt;
* [https://wiki.ai-redgio50.s5labs.eu/index.php?title=Fashion_Product_Defects_Dataset Fashion Product Defects Dataset]&lt;br /&gt;
&lt;br /&gt;
== SmartSpot IoT Device &#039;&#039;by Libelium / HOPU&#039;&#039; ==&lt;br /&gt;
* [https://wiki.ai-redgio50.s5labs.eu/index.php?title=SmartSpot_IoT_Device SmartSpot IoT Device]&lt;br /&gt;
&lt;br /&gt;
== Labelled Force/Torque Time Series from Robotic Wheel Assembly Dataset &#039;&#039;by CTU&#039;&#039; ==&lt;br /&gt;
* [https://wiki.ai-redgio50.s5labs.eu/index.php?title=Labelled_Force/Torque_Time_Series_from_Robotic_Wheel_Assembly_Dataset Labelled Force/Torque Time Series from Robotic Wheel Assembly Dataset]&lt;br /&gt;
&lt;br /&gt;
== Anomaly Detection in Force/Torque Time Series From Delta Robot &#039;&#039;by CTU&#039;&#039; ==&lt;br /&gt;
* [https://wiki.ai-redgio50.s5labs.eu/index.php?title=Anomaly_Detection_in_Force/Torque_Time_Series_From_Delta_Robot Anomaly Detection in Force/Torque Time Series From Delta Robot]&lt;br /&gt;
&lt;br /&gt;
== Orthogonal Views Extractor from STEP File &#039;&#039;by TXT&#039;&#039; ==&lt;br /&gt;
* [https://wiki.ai-redgio50.s5labs.eu/index.php?title=Orthogonal_Views_Extractor_from_STEP_File Orthogonal Views Extractor from STEP File]&lt;br /&gt;
&lt;br /&gt;
== Orthogonal Views Extractor from STEP File - PythonOCC Version &#039;&#039;by TXT&#039;&#039; ==&lt;br /&gt;
* [https://wiki.ai-redgio50.s5labs.eu/index.php?title=Orthogonal_Views_Extractor_from_STEP_File_-_PythonOCC_Version Orthogonal Views Extractor from STEP File - PythonOCC Version]&lt;br /&gt;
&lt;br /&gt;
== Image Colour Count with Python &#039;&#039;by TXT&#039;&#039; ==&lt;br /&gt;
* [https://wiki.ai-redgio50.s5labs.eu/index.php?title=Image_Colour_Count_with_Python Image Colour Count with Python]&lt;br /&gt;
&lt;br /&gt;
== Bearing Fault Datasets &#039;&#039;by Flanders Make&#039;&#039; ==&lt;br /&gt;
* [https://wiki.ai-redgio50.s5labs.eu/index.php?title=Bearing_Fault_Datasets Bearing Fault Datasets]&lt;br /&gt;
&lt;br /&gt;
== D2P Toolbox &#039;&#039;by Flanders Make&#039;&#039; ==&lt;br /&gt;
* [https://wiki.ai-redgio50.s5labs.eu/index.php?title=D2P_Toolbox D2P Toolbox]&lt;br /&gt;
&lt;br /&gt;
== Pneumatic Pressure and Electrical Current Time-Series &#039;&#039;by JSI&#039;&#039; ==&lt;br /&gt;
* [https://wiki.ai-redgio50.s5labs.eu/index.php?title=Pneumatic_Pressure_and_Electrical_Current_Time_Series Pneumatic Pressure and Electrical Current Time-Series]&lt;br /&gt;
&lt;br /&gt;
== Error In Alignment (ERAL) Algorithm &#039;&#039;by JSI&#039;&#039; ==&lt;br /&gt;
* [https://wiki.ai-redgio50.s5labs.eu/index.php?title=Error_In_Alignment_(ERAL)_Algorithm Error In Alignment (ERAL) Algorithm]&lt;br /&gt;
&lt;br /&gt;
== Self-signed Certificates Generation Tool &#039;&#039;by JSI&#039;&#039; ==&lt;br /&gt;
* [https://wiki.ai-redgio50.s5labs.eu/index.php?title=Self-signed_Certificates_Generation_Tool Self-signed Certificates Generation Tool]&lt;br /&gt;
&lt;br /&gt;
== Acceleration Datasets &#039;&#039;by ACTEMA&#039;&#039; ==&lt;br /&gt;
* [https://wiki.ai-redgio50.s5labs.eu/index.php?title=Acceleration_Datasets Acceleration Datasets]&lt;br /&gt;
&lt;br /&gt;
== Cream Cheese Production and Quality Dataset &#039;&#039;by GRADIANT and Quescrem&#039;&#039; ==&lt;br /&gt;
* [https://wiki.ai-redgio50.s5labs.eu/index.php?title=Cream_Cheese_Production_and_Quality_Dataset Cream Cheese Production and Quality Dataset]&lt;br /&gt;
&lt;br /&gt;
== Tabular-data Rehearsal-based Incremental Lifelong Learning Framework (TRIL3) &#039;&#039;by GRADIANT&#039;&#039; ==&lt;br /&gt;
* [https://wiki.ai-redgio50.s5labs.eu/index.php?title=Tabular-data_Rehearsal-based_Incremental_Lifelong_Learning_Framework_(TRIL3) Tabular-data Rehearsal-based Incremental Lifelong Learning Framework (TRIL3)]&lt;br /&gt;
&lt;br /&gt;
== AM Tomography Image Processing Algorithm &#039;&#039;by GRADIANT&#039;&#039; ==&lt;br /&gt;
* [https://wiki.ai-redgio50.s5labs.eu/index.php?title=AM_Tomography_Image_Processing_Algorithm AM Tomography Image Processing Algorithm]&lt;/div&gt;</summary>
		<author><name>Admino739mjm7</name></author>
	</entry>
	<entry>
		<id>https://wiki.ai-redgio50.s5labs.eu/index.php?title=Tabular-data_Rehearsal-based_Incremental_Lifelong_Learning_Framework_(TRIL3)&amp;diff=554</id>
		<title>Tabular-data Rehearsal-based Incremental Lifelong Learning Framework (TRIL3)</title>
		<link rel="alternate" type="text/html" href="https://wiki.ai-redgio50.s5labs.eu/index.php?title=Tabular-data_Rehearsal-based_Incremental_Lifelong_Learning_Framework_(TRIL3)&amp;diff=554"/>
		<updated>2025-10-20T12:45:28Z</updated>

		<summary type="html">&lt;p&gt;Admino739mjm7: /* Usage */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Continual Learning paradigm avoiding Catastrophic Forgetting for generic tabular-data problems. &amp;lt;/strong&amp;gt; [[File:Tril3 architecture.png|thumb|right|&amp;lt;div style=&amp;quot;font-size:88%;line-height: 1.5em&amp;quot;&amp;gt;Image 1: TRIL3 architecture and data flow.&amp;lt;/div&amp;gt;]]&lt;br /&gt;
&lt;br /&gt;
== Asset Description ==&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
New methodology, coined as Tabular-data Rehearsal-based Incremental Lifelong Learning framework (TRIL3), designed to address the phenomenon of catastrophic forgetting in tabular data classification problems.&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Asset Details ==&lt;br /&gt;
=== Dataset Information ===&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
Continual learning (CL) poses the important challenge of adapting to evolving data distributions without forgetting previously acquired knowledge while consolidating new knowledge. In this paper, we introduce a new methodology, coined as Tabular-data Rehearsal-based Incremental Lifelong Learning framework (TRIL3), designed to address the phenomenon of catastrophic forgetting in tabular data classification problems. TRIL3 uses the prototype-based incremental generative model XuILVQ to generate synthetic data to preserve old knowledge and the DNDF algorithm, which was modified to run in an incremental way, to learn classification tasks for tabular data, without storing old samples. After different tests to obtain the adequate percentage of synthetic data and to compare TRIL3 with other available CL proposals, we can conclude that the performance of TRIL3 outperforms other options in the literature using only 50% synthetic data.&amp;lt;/p&amp;gt;&lt;br /&gt;
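The rehearsal idea can be illustrated with a toy sketch. This is not the paper's implementation: TRIL3 generates synthetic data with XuILVQ, whereas here stored prototypes are simply jittered with Gaussian noise, and the function name and signature are invented for illustration:

```python
import random

def rehearsal_batch(new_samples, prototypes, synth_frac=0.5):
    """Mix new task data with synthetic samples drawn around stored
    prototypes, so the learner rehearses old classes without keeping
    old samples. Toy pseudorehearsal sketch, not TRIL3 itself."""
    # Number of synthetic samples needed so they make up synth_frac
    # of the combined batch.
    n_synth = int(len(new_samples) * synth_frac / (1 - synth_frac))
    synthetic = [
        # Jitter each chosen prototype with small Gaussian noise.
        ([x + random.gauss(0, 0.05) for x in proto], label)
        for proto, label in random.choices(prototypes, k=n_synth)
    ]
    batch = list(new_samples) + synthetic
    random.shuffle(batch)
    return batch
```

With `synth_frac=0.5`, as in the paper's best-performing configuration, half of each training batch consists of synthetic samples standing in for previously learned classes.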
&lt;br /&gt;
== Usage ==&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
The publication is available [https://doi.org/10.48550/arXiv.2407.09039 here]. Please use the following reference:&lt;br /&gt;
&#039;&#039;García-Santaclara, Pablo &amp;amp; Fernández-Castro, Bruno &amp;amp; Redondo, Rebeca. (2024). Overcoming Catastrophic Forgetting in Tabular Data Classification: A Pseudorehearsal-based approach. 10.48550/arXiv.2407.09039&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Maturity===&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
Not available &amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Licence===&lt;br /&gt;
Not available&lt;br /&gt;
&lt;br /&gt;
== Resources ==&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;The publication is available [https://doi.org/10.48550/arXiv.2407.09039 here]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;For further information, please contact mmarquez@gradiant.org.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Provided by [https://gradiant.org/en/ GRADIANT]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Acknowledgement====&lt;br /&gt;
&amp;lt;p style=&amp;quot;font-size:90%;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;Part of the associated work to this paper was carried out by Gradiant under the AI REDGIO 5.0 project initiative.&#039;&#039;&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Relevant Categories==&lt;br /&gt;
[[Category:Theoretical Foundations]][[Category:Edge AI]][[Category:Machine Learning]]&lt;/div&gt;</summary>
		<author><name>Admino739mjm7</name></author>
	</entry>
	<entry>
		<id>https://wiki.ai-redgio50.s5labs.eu/index.php?title=Tabular-data_Rehearsal-based_Incremental_Lifelong_Learning_Framework_(TRIL3)&amp;diff=553</id>
		<title>Tabular-data Rehearsal-based Incremental Lifelong Learning Framework (TRIL3)</title>
		<link rel="alternate" type="text/html" href="https://wiki.ai-redgio50.s5labs.eu/index.php?title=Tabular-data_Rehearsal-based_Incremental_Lifelong_Learning_Framework_(TRIL3)&amp;diff=553"/>
		<updated>2025-10-20T12:45:05Z</updated>

		<summary type="html">&lt;p&gt;Admino739mjm7: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Continual Learning paradigm avoiding Catastrophic Forgetting for generic tabular-data problems. &amp;lt;/strong&amp;gt; [[File:Tril3 architecture.png|thumb|right|&amp;lt;div style=&amp;quot;font-size:88%;line-height: 1.5em&amp;quot;&amp;gt;Image 1: TRIL3 architecture and data flow.&amp;lt;/div&amp;gt;]]&lt;br /&gt;
&lt;br /&gt;
== Asset Description ==&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
New methodology, coined as Tabular-data Rehearsal-based Incremental Lifelong Learning framework (TRIL3), designed to address the phenomenon of catastrophic forgetting in tabular data classification problems.&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Asset Details ==&lt;br /&gt;
=== Dataset Information ===&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
Continual learning (CL) poses the important challenge of adapting to evolving data distributions without forgetting previously acquired knowledge while consolidating new knowledge. In this paper, we introduce a new methodology, coined as Tabular-data Rehearsal-based Incremental Lifelong Learning framework (TRIL3), designed to address the phenomenon of catastrophic forgetting in tabular data classification problems. TRIL3 uses the prototype-based incremental generative model XuILVQ to generate synthetic data to preserve old knowledge and the DNDF algorithm, which was modified to run in an incremental way, to learn classification tasks for tabular data, without storing old samples. After different tests to obtain the adequate percentage of synthetic data and to compare TRIL3 with other available CL proposals, we can conclude that the performance of TRIL3 outperforms other options in the literature using only 50% synthetic data.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Usage ==&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
The publication is available [https://doi.org/10.48550/arXiv.2407.09039 here]. Please use the reference below:&lt;br /&gt;
&#039;&#039;García-Santaclara, Pablo &amp;amp; Fernández-Castro, Bruno &amp;amp; Redondo, Rebeca. (2024). Overcoming Catastrophic Forgetting in Tabular Data Classification: A Pseudorehearsal-based approach. 10.48550/arXiv.2407.09039&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Maturity===&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
Not available &amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Licence===&lt;br /&gt;
Not available&lt;br /&gt;
&lt;br /&gt;
== Resources ==&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;The publication is available [https://doi.org/10.48550/arXiv.2407.09039 here]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;For further information, please contact mmarquez@gradiant.org.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Provided by [https://gradiant.org/en/ GRADIANT]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Acknowledgement====&lt;br /&gt;
&amp;lt;p style=&amp;quot;font-size:90%;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;Part of the associated work to this paper was carried out by Gradiant under the AI REDGIO 5.0 project initiative.&#039;&#039;&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Relevant Categories==&lt;br /&gt;
[[Category:Theoretical Foundations]][[Category:Edge AI]][[Category:Machine Learning]]&lt;/div&gt;</summary>
		<author><name>Admino739mjm7</name></author>
	</entry>
	<entry>
		<id>https://wiki.ai-redgio50.s5labs.eu/index.php?title=AM_Tomography_Image_Processing_Algorithm&amp;diff=552</id>
		<title>AM Tomography Image Processing Algorithm</title>
		<link rel="alternate" type="text/html" href="https://wiki.ai-redgio50.s5labs.eu/index.php?title=AM_Tomography_Image_Processing_Algorithm&amp;diff=552"/>
		<updated>2025-10-20T12:44:36Z</updated>

		<summary type="html">&lt;p&gt;Admino739mjm7: Created page with &amp;quot;&amp;lt;strong&amp;gt;Image processing module for extracting the melt pool area and center coordinates from the additive manufacturing tomography. &amp;lt;/strong&amp;gt; &amp;lt;div style=&amp;quot;font-size:88%;line-height: 1.5em&amp;quot;&amp;gt;Image 1: Outputs of the image processing module.&amp;lt;/div&amp;gt;  == Asset Description == &amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt; Processing algorithm that automatically calculates the melt pool area and the melt pool center coordinates, extracting these...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Image processing module for extracting the melt pool area and center coordinates from the additive manufacturing tomography. &amp;lt;/strong&amp;gt; [[File:Manufacturing tomography.png|thumb|right|&amp;lt;div style=&amp;quot;font-size:88%;line-height: 1.5em&amp;quot;&amp;gt;Image 1: Outputs of the image processing module.&amp;lt;/div&amp;gt;]]&lt;br /&gt;
&lt;br /&gt;
== Asset Description ==&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
Processing algorithm that automatically calculates the melt pool area and the melt pool center coordinates, extracting these values from the corresponding melt pool tomography image of the additive manufacturing process.&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Asset Details ==&lt;br /&gt;
=== Dataset Information ===&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
The implementation of this algorithm is framed within an experiment focused on providing real-time monitoring through AI at the Edge in an additive manufacturing (AM) process for early defect detection, in particular geometrical deformations in the metal pieces being manufactured.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
The main reason defects occur in this particular manufacturing process is the accumulation of heat as layers are added to the metal piece. For this reason, besides the analysis and monitoring of parameters such as laser power, position angles, and position coordinates, the analysis of the melt pool tomography image of the AM process, together with the evolution of the z coordinate, is critical. In particular, the melt pool area and the melt pool center offset are highly relevant variables. Therefore, in order to apply tabular-data AI/ML models for this kind of anomaly detection in the manufacturing process, the melt pool area and melt pool centroid coordinates had to be extracted from the tomography image.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
Regarding the technical implementation of this algorithm, note that the images to be processed arrive as bytearrays, since they are collected from an OPC-UA server. Before processing, the information retrieved from the OPC-UA server therefore needs to be converted into a format from which the OpenCV library can detect and extract the right mask. Currently this is done with the Image library, reading the image as RGBA and then converting it to a NumPy array.&lt;br /&gt;
&lt;br /&gt;
Described in simplified form, the image processing algorithm proceeds as follows:&lt;br /&gt;
1.	Conversion of the image to grayscale.&lt;br /&gt;
2.	Enhancement of the area definition by using a GaussianBlur function.&lt;br /&gt;
3.	Detection of the ROI (region of interest) by defining the limit values from which each pixel will be considered as non-background.&lt;br /&gt;
4.	Extraction of the area by counting the white pixels. Since the mask contains only two values, black (0) and white (255), all pixel values are summed and the result is divided by 255 to obtain the area in pixels.&lt;br /&gt;
5.	Finally, the centroid of the melt pool shape is computed, obtaining the Cx and Cy coordinates.&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
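The steps above can be sketched with NumPy alone. This is a minimal approximation, not the deployed module: the real pipeline uses OpenCV, the blur step is omitted here, and the function name and the `thresh` ROI limit value are assumptions for illustration:

```python
import numpy as np

def melt_pool_stats(rgba, thresh=128):
    """Return (area_px, cx, cy) for the bright melt pool region of an
    RGBA image array. Toy sketch of the described pipeline."""
    # 1. Grayscale conversion (simple channel average stands in for
    #    cv2.cvtColor in the real pipeline).
    gray = rgba[..., :3].mean(axis=2)
    # 2. (GaussianBlur enhancement step omitted in this sketch.)
    # 3. ROI detection: pixels above the limit value become white (255),
    #    everything else background (0).
    mask = np.where(gray > thresh, 255, 0).astype(np.uint64)
    # 4. Area in pixels: sum all values, then divide by 255.
    area = int(mask.sum() // 255)
    # 5. Centroid (Cx, Cy) of the melt pool shape from the binary mask.
    ys, xs = np.nonzero(mask)
    if area == 0:
        return 0, float("nan"), float("nan")
    return area, float(xs.mean()), float(ys.mean())
```

In the deployed container, the same three values would then be published to the output RabbitMQ queue for the downstream tabular-data models.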
&lt;br /&gt;
== Usage ==&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
The usage of this tool is detailed in the corresponding GitHub repository. However, the licence applicable to this asset is currently proprietary, so the code has not been publicly released.&lt;br /&gt;
&lt;br /&gt;
The algorithm is prepared to be deployed as a Docker container, reading the input image data from one RabbitMQ queue and publishing the melt pool area and melt pool center coordinates to another.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Maturity===&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
PoC ready, Ongoing Development &amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Licence===&lt;br /&gt;
Proprietary&lt;br /&gt;
&lt;br /&gt;
== Resources ==&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;For further information, please contact mmarquez@gradiant.org.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Provided by [https://gradiant.org/en/ GRADIANT]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Acknowledgement====&lt;br /&gt;
&amp;lt;p style=&amp;quot;font-size:90%;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;The algorithm was created in the framework of the AI REDGIO 5.0 project. It will be used in the DF XIV experiment (AI at the Edge for real-time monitoring of an additive manufacturing cell), which is being developed by Gradiant.&#039;&#039;&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Relevant Categories==&lt;br /&gt;
[[Category:Jupyter Notebook]][[Category:Docker Container]][[Category:Quality Control]][[Category:Edge AI]]&lt;/div&gt;</summary>
		<author><name>Admino739mjm7</name></author>
	</entry>
	<entry>
		<id>https://wiki.ai-redgio50.s5labs.eu/index.php?title=File:Manufacturing_tomography.png&amp;diff=551</id>
		<title>File:Manufacturing tomography.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.ai-redgio50.s5labs.eu/index.php?title=File:Manufacturing_tomography.png&amp;diff=551"/>
		<updated>2025-10-20T12:38:32Z</updated>

		<summary type="html">&lt;p&gt;Admino739mjm7: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Admino739mjm7</name></author>
	</entry>
	<entry>
		<id>https://wiki.ai-redgio50.s5labs.eu/index.php?title=AI_Reference_Implementations&amp;diff=550</id>
		<title>AI Reference Implementations</title>
		<link rel="alternate" type="text/html" href="https://wiki.ai-redgio50.s5labs.eu/index.php?title=AI_Reference_Implementations&amp;diff=550"/>
		<updated>2025-10-20T12:33:51Z</updated>

		<summary type="html">&lt;p&gt;Admino739mjm7: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;A compilation of tools and resources that can help you kickstart your Industry 5.0 journey.&lt;br /&gt;
&lt;br /&gt;
== Quality Control in Industry with CV and TinyML &#039;&#039;by ExpertAI&#039;&#039; ==&lt;br /&gt;
* [https://wiki.ai-redgio50.s5labs.eu/index.php?title=Quality_Control_in_Industry_with_CV_and_TinyML Quality Control in Industry with CV and TinyML]&lt;br /&gt;
&lt;br /&gt;
== FPGA Device Plugin &#039;&#039;by GRADIANT&#039;&#039; ==&lt;br /&gt;
* [https://wiki.ai-redgio50.s5labs.eu/index.php?title=FPGA_Device_Plugin FPGA Device Plugin for Kubernetes]&lt;br /&gt;
&lt;br /&gt;
== XuILVQ: A River Implementation of the Incremental Learning Vector Quantization for IoT &#039;&#039;by GRADIANT&#039;&#039; ==&lt;br /&gt;
* [https://wiki.ai-redgio50.s5labs.eu/index.php?title=XuILVQ:_A_River_Implementation_of_the_Incremental_Learning_Vector_Quantization_for_IoT XuILVQ: A River Implementation of the Incremental Learning Vector Quantization for IoT]&lt;br /&gt;
&lt;br /&gt;
== Data Analysis Dashboard &#039;&#039;by SCCH&#039;&#039; ==&lt;br /&gt;
* [https://wiki.ai-redgio50.s5labs.eu/index.php?title=Data_Analysis_Dashboard Data Analysis Dashboard]&lt;br /&gt;
&lt;br /&gt;
== Energy Usage Dataset and Model &#039;&#039;by PBN&#039;&#039; ==&lt;br /&gt;
* [https://wiki.ai-redgio50.s5labs.eu/index.php?title=Energy_Usage_Dataset Energy Usage Dataset and Model]&lt;br /&gt;
&lt;br /&gt;
== Fabric Defects Dataset &#039;&#039;by Katty Fashion&#039;&#039; ==&lt;br /&gt;
* [https://wiki.ai-redgio50.s5labs.eu/index.php?title=Fabrics_Defect_Dataset Fabric Defects Dataset]&lt;br /&gt;
&lt;br /&gt;
== Fashion Product Defects Dataset &#039;&#039;by Katty Fashion&#039;&#039; ==&lt;br /&gt;
* [https://wiki.ai-redgio50.s5labs.eu/index.php?title=Fashion_Product_Defects_Dataset Fashion Product Defects Dataset]&lt;br /&gt;
&lt;br /&gt;
== SmartSpot IoT Device &#039;&#039;by Libelium / HOPU&#039;&#039; ==&lt;br /&gt;
* [https://wiki.ai-redgio50.s5labs.eu/index.php?title=SmartSpot_IoT_Device SmartSpot IoT Device]&lt;br /&gt;
&lt;br /&gt;
== Labelled Force/Torque Time Series from Robotic Wheel Assembly Dataset &#039;&#039;by CTU&#039;&#039; ==&lt;br /&gt;
* [https://wiki.ai-redgio50.s5labs.eu/index.php?title=Labelled_Force/Torque_Time_Series_from_Robotic_Wheel_Assembly_Dataset Labelled Force/Torque Time Series from Robotic Wheel Assembly Dataset]&lt;br /&gt;
&lt;br /&gt;
== Anomaly Detection in Force/Torque Time Series From Delta Robot &#039;&#039;by CTU&#039;&#039; ==&lt;br /&gt;
* [https://wiki.ai-redgio50.s5labs.eu/index.php?title=Anomaly_Detection_in_Force/Torque_Time_Series_From_Delta_Robot Anomaly Detection in Force/Torque Time Series From Delta Robot]&lt;br /&gt;
&lt;br /&gt;
== Orthogonal Views Extractor from STEP File &#039;&#039;by TXT&#039;&#039; ==&lt;br /&gt;
* [https://wiki.ai-redgio50.s5labs.eu/index.php?title=Orthogonal_Views_Extractor_from_STEP_File Orthogonal Views Extractor from STEP File]&lt;br /&gt;
&lt;br /&gt;
== Orthogonal Views Extractor from STEP File - PythonOCC Version &#039;&#039;by TXT&#039;&#039; ==&lt;br /&gt;
* [https://wiki.ai-redgio50.s5labs.eu/index.php?title=Orthogonal_Views_Extractor_from_STEP_File_-_PythonOCC_Version Orthogonal Views Extractor from STEP File - PythonOCC Version]&lt;br /&gt;
&lt;br /&gt;
== Image Colour Count with Python &#039;&#039;by TXT&#039;&#039; ==&lt;br /&gt;
* [https://wiki.ai-redgio50.s5labs.eu/index.php?title=Image_Colour_Count_with_Python Image Colour Count with Python]&lt;br /&gt;
&lt;br /&gt;
== Bearing Fault Datasets &#039;&#039;by Flanders Make&#039;&#039; ==&lt;br /&gt;
* [https://wiki.ai-redgio50.s5labs.eu/index.php?title=Bearing_Fault_Datasets Bearing Fault Datasets]&lt;br /&gt;
&lt;br /&gt;
== D2P Toolbox &#039;&#039;by Flanders Make&#039;&#039; ==&lt;br /&gt;
* [https://wiki.ai-redgio50.s5labs.eu/index.php?title=D2P_Toolbox D2P Toolbox]&lt;br /&gt;
&lt;br /&gt;
== Pneumatic Pressure and Electrical Current Time-Series &#039;&#039;by JSI&#039;&#039; ==&lt;br /&gt;
* [https://wiki.ai-redgio50.s5labs.eu/index.php?title=Pneumatic_Pressure_and_Electrical_Current_Time_Series Pneumatic Pressure and Electrical Current Time-Series]&lt;br /&gt;
&lt;br /&gt;
== Error In Alignment (ERAL) Algorithm &#039;&#039;by JSI&#039;&#039; ==&lt;br /&gt;
* [https://wiki.ai-redgio50.s5labs.eu/index.php?title=Error_In_Alignment_(ERAL)_Algorithm Error In Alignment (ERAL) Algorithm]&lt;br /&gt;
&lt;br /&gt;
== Self-signed Certificates Generation Tool &#039;&#039;by JSI&#039;&#039; ==&lt;br /&gt;
* [https://wiki.ai-redgio50.s5labs.eu/index.php?title=Self-signed_Certificates_Generation_Tool Self-signed Certificates Generation Tool]&lt;br /&gt;
&lt;br /&gt;
== Acceleration Datasets &#039;&#039;by ACTEMA&#039;&#039; ==&lt;br /&gt;
* [https://wiki.ai-redgio50.s5labs.eu/index.php?title=Acceleration_Datasets Acceleration Datasets]&lt;br /&gt;
&lt;br /&gt;
== Cream Cheese Production and Quality Dataset &#039;&#039;by GRADIANT and Quescrem&#039;&#039; ==&lt;br /&gt;
* [https://wiki.ai-redgio50.s5labs.eu/index.php?title=Cream_Cheese_Production_and_Quality_Dataset Cream Cheese Production and Quality Dataset]&lt;br /&gt;
&lt;br /&gt;
== Tabular-data Rehearsal-based Incremental Lifelong Learning Framework (TRIL3) &#039;&#039;by GRADIANT&#039;&#039; ==&lt;br /&gt;
* [https://wiki.ai-redgio50.s5labs.eu/index.php?title=Tabular-data_Rehearsal-based_Incremental_Lifelong_Learning_Framework_(TRIL3) Tabular-data Rehearsal-based Incremental Lifelong Learning Framework (TRIL3)]&lt;/div&gt;</summary>
		<author><name>Admino739mjm7</name></author>
	</entry>
	<entry>
		<id>https://wiki.ai-redgio50.s5labs.eu/index.php?title=AI_Reference_Implementations&amp;diff=549</id>
		<title>AI Reference Implementations</title>
		<link rel="alternate" type="text/html" href="https://wiki.ai-redgio50.s5labs.eu/index.php?title=AI_Reference_Implementations&amp;diff=549"/>
		<updated>2025-10-20T12:33:35Z</updated>

		<summary type="html">&lt;p&gt;Admino739mjm7: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;A compilation of tools and resources that can help you kickstart your Industry 5.0 journey.&lt;br /&gt;
&lt;br /&gt;
== Quality Control in Industry with CV and TinyML &#039;&#039;by ExpertAI&#039;&#039; ==&lt;br /&gt;
* [https://wiki.ai-redgio50.s5labs.eu/index.php?title=Quality_Control_in_Industry_with_CV_and_TinyML Quality Control in Industry with CV and TinyML]&lt;br /&gt;
&lt;br /&gt;
== FPGA Device Plugin &#039;&#039;by GRADIANT&#039;&#039; ==&lt;br /&gt;
* [https://wiki.ai-redgio50.s5labs.eu/index.php?title=FPGA_Device_Plugin FPGA Device Plugin for Kubernetes]&lt;br /&gt;
&lt;br /&gt;
== XuILVQ: A River Implementation of the Incremental Learning Vector Quantization for IoT &#039;&#039;by GRADIANT&#039;&#039; ==&lt;br /&gt;
* [https://wiki.ai-redgio50.s5labs.eu/index.php?title=XuILVQ:_A_River_Implementation_of_the_Incremental_Learning_Vector_Quantization_for_IoT XuILVQ: A River Implementation of the Incremental Learning Vector Quantization for IoT]&lt;br /&gt;
&lt;br /&gt;
== Data Analysis Dashboard &#039;&#039;by SCCH&#039;&#039; ==&lt;br /&gt;
* [https://wiki.ai-redgio50.s5labs.eu/index.php?title=Data_Analysis_Dashboard Data Analysis Dashboard]&lt;br /&gt;
&lt;br /&gt;
== Energy Usage Dataset and Model &#039;&#039;by PBN&#039;&#039; ==&lt;br /&gt;
* [https://wiki.ai-redgio50.s5labs.eu/index.php?title=Energy_Usage_Dataset Energy Usage Dataset and Model]&lt;br /&gt;
&lt;br /&gt;
== Fabric Defects Dataset &#039;&#039;by Katty Fashion&#039;&#039; ==&lt;br /&gt;
* [https://wiki.ai-redgio50.s5labs.eu/index.php?title=Fabrics_Defect_Dataset Fabric Defects Dataset]&lt;br /&gt;
&lt;br /&gt;
== Fashion Product Defects Dataset &#039;&#039;by Katty Fashion&#039;&#039; ==&lt;br /&gt;
* [https://wiki.ai-redgio50.s5labs.eu/index.php?title=Fashion_Product_Defects_Dataset Fashion Product Defects Dataset]&lt;br /&gt;
&lt;br /&gt;
== SmartSpot IoT Device &#039;&#039;by Libelium / HOPU&#039;&#039; ==&lt;br /&gt;
* [https://wiki.ai-redgio50.s5labs.eu/index.php?title=SmartSpot_IoT_Device SmartSpot IoT Device]&lt;br /&gt;
&lt;br /&gt;
== Labelled Force/Torque Time Series from Robotic Wheel Assembly Dataset &#039;&#039;by CTU&#039;&#039; ==&lt;br /&gt;
* [https://wiki.ai-redgio50.s5labs.eu/index.php?title=Labelled_Force/Torque_Time_Series_from_Robotic_Wheel_Assembly_Dataset Labelled Force/Torque Time Series from Robotic Wheel Assembly Dataset]&lt;br /&gt;
&lt;br /&gt;
== Anomaly Detection in Force/Torque Time Series From Delta Robot &#039;&#039;by CTU&#039;&#039; ==&lt;br /&gt;
* [https://wiki.ai-redgio50.s5labs.eu/index.php?title=Anomaly_Detection_in_Force/Torque_Time_Series_From_Delta_Robot Anomaly Detection in Force/Torque Time Series From Delta Robot]&lt;br /&gt;
&lt;br /&gt;
== Orthogonal Views Extractor from STEP File &#039;&#039;by TXT&#039;&#039; ==&lt;br /&gt;
* [https://wiki.ai-redgio50.s5labs.eu/index.php?title=Orthogonal_Views_Extractor_from_STEP_File Orthogonal Views Extractor from STEP File]&lt;br /&gt;
&lt;br /&gt;
== Orthogonal Views Extractor from STEP File - PythonOCC Version &#039;&#039;by TXT&#039;&#039; ==&lt;br /&gt;
* [https://wiki.ai-redgio50.s5labs.eu/index.php?title=Orthogonal_Views_Extractor_from_STEP_File_-_PythonOCC_Version Orthogonal Views Extractor from STEP File - PythonOCC Version]&lt;br /&gt;
&lt;br /&gt;
== Image Colour Count with Python &#039;&#039;by TXT&#039;&#039; ==&lt;br /&gt;
* [https://wiki.ai-redgio50.s5labs.eu/index.php?title=Image_Colour_Count_with_Python Image Colour Count with Python]&lt;br /&gt;
&lt;br /&gt;
== Bearing Fault Datasets &#039;&#039;by Flanders Make&#039;&#039; ==&lt;br /&gt;
* [https://wiki.ai-redgio50.s5labs.eu/index.php?title=Bearing_Fault_Datasets Bearing Fault Datasets]&lt;br /&gt;
&lt;br /&gt;
== D2P Toolbox &#039;&#039;by Flanders Make&#039;&#039; ==&lt;br /&gt;
* [https://wiki.ai-redgio50.s5labs.eu/index.php?title=D2P_Toolbox D2P Toolbox]&lt;br /&gt;
&lt;br /&gt;
== Pneumatic Pressure and Electrical Current Time-Series &#039;&#039;by JSI&#039;&#039; ==&lt;br /&gt;
* [https://wiki.ai-redgio50.s5labs.eu/index.php?title=Pneumatic_Pressure_and_Electrical_Current_Time_Series Pneumatic Pressure and Electrical Current Time-Series]&lt;br /&gt;
&lt;br /&gt;
== Error In Alignment (ERAL) Algorithm &#039;&#039;by JSI&#039;&#039; ==&lt;br /&gt;
* [https://wiki.ai-redgio50.s5labs.eu/index.php?title=Error_In_Alignment_(ERAL)_Algorithm Error In Alignment (ERAL) Algorithm]&lt;br /&gt;
&lt;br /&gt;
== Self-signed Certificates Generation Tool &#039;&#039;by JSI&#039;&#039; ==&lt;br /&gt;
* [https://wiki.ai-redgio50.s5labs.eu/index.php?title=Self-signed_Certificates_Generation_Tool Self-signed Certificates Generation Tool]&lt;br /&gt;
&lt;br /&gt;
== Acceleration Datasets &#039;&#039;by ACTEMA&#039;&#039; ==&lt;br /&gt;
* [https://wiki.ai-redgio50.s5labs.eu/index.php?title=Acceleration_Datasets Acceleration Datasets]&lt;br /&gt;
&lt;br /&gt;
== Cream Cheese Production and Quality Dataset &#039;&#039;by GRADIANT and Quescrem&#039;&#039; ==&lt;br /&gt;
* [https://wiki.ai-redgio50.s5labs.eu/index.php?title=Cream_Cheese_Production_and_Quality_Dataset Cream Cheese Production and Quality Dataset]&lt;br /&gt;
&lt;br /&gt;
== Tabular-data Rehearsal-based Incremental Lifelong Learning Framework (TRIL3) &#039;&#039;by GRADIANT&#039;&#039; ==&lt;br /&gt;
* [https://wiki.ai-redgio50.s5labs.eu/index.php?title=Tabular-data_Rehearsal-based_Incremental_Lifelong_Learning_Framework_(TRIL3) Tabular-data Rehearsal-based Incremental Lifelong Learning Framework (TRIL3)]&lt;/div&gt;</summary>
		<author><name>Admino739mjm7</name></author>
	</entry>
	<entry>
		<id>https://wiki.ai-redgio50.s5labs.eu/index.php?title=Tabular-data_Rehearsal-based_Incremental_Lifelong_Learning_Framework_(TRIL3)&amp;diff=548</id>
		<title>Tabular-data Rehearsal-based Incremental Lifelong Learning Framework (TRIL3)</title>
		<link rel="alternate" type="text/html" href="https://wiki.ai-redgio50.s5labs.eu/index.php?title=Tabular-data_Rehearsal-based_Incremental_Lifelong_Learning_Framework_(TRIL3)&amp;diff=548"/>
		<updated>2025-10-20T12:32:09Z</updated>

		<summary type="html">&lt;p&gt;Admino739mjm7: Created page with &amp;quot;&amp;lt;strong&amp;gt;Continual Learning paradigm avoiding Catastrophic Forgetting for generic tabular-data problems. &amp;lt;/strong&amp;gt; &amp;lt;div style=&amp;quot;font-size:88%;line-height: 1.5em&amp;quot;&amp;gt;Image 1: TRIL3 architecture and data flow.&amp;lt;/div&amp;gt;  == Asset Description == &amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt; New methodology, coined as Tabular-data Rehearsal-based Incremental Lifelong Learning framework (TRIL3), designed to address the phenomenon of catastrophic forgetting...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Continual Learning paradigm avoiding Catastrophic Forgetting for generic tabular-data problems. &amp;lt;/strong&amp;gt; [[File:Tril3 architecture.png|thumb|right|&amp;lt;div style=&amp;quot;font-size:88%;line-height: 1.5em&amp;quot;&amp;gt;Image 1: TRIL3 architecture and data flow.&amp;lt;/div&amp;gt;]]&lt;br /&gt;
&lt;br /&gt;
== Asset Description ==&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
A new methodology, coined the Tabular-data Rehearsal-based Incremental Lifelong Learning framework (TRIL3), is designed to address the phenomenon of catastrophic forgetting in tabular data classification problems.&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Asset Details ==&lt;br /&gt;
=== Method Information ===&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
Continual learning (CL) poses the important challenge of adapting to evolving data distributions without forgetting previously acquired knowledge while consolidating new knowledge. In this paper, we introduce a new methodology, coined as Tabular-data Rehearsal-based Incremental Lifelong Learning framework (TRIL3), designed to address the phenomenon of catastrophic forgetting in tabular data classification problems. TRIL3 uses the prototype-based incremental generative model XuILVQ to generate synthetic data to preserve old knowledge and the DNDF algorithm, which was modified to run in an incremental way, to learn classification tasks for tabular data, without storing old samples. After different tests to determine the adequate percentage of synthetic data and to compare TRIL3 with other available CL proposals, we can conclude that TRIL3 outperforms the other options in the literature while using only 50% synthetic data.&amp;lt;/p&amp;gt;&lt;br /&gt;
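The rehearsal mechanism described above can be sketched in miniature. This is a hedged illustration only: a toy per-class Gaussian "generator" and a nearest-centroid classifier stand in for XuILVQ and the incremental DNDF that the actual framework uses.

```python
import numpy as np

rng = np.random.default_rng(0)

class PseudoRehearsalLearner:
    """Toy nearest-centroid classifier with generative replay of old classes."""

    def __init__(self, synth_per_class=20):
        self.synth_per_class = synth_per_class
        self.proto = {}      # class -> (mean, std): toy stand-in for XuILVQ
        self.centroids = {}  # toy stand-in for the incremental classifier

    def partial_fit(self, X, y):
        # Replay synthetic samples for classes seen in earlier increments,
        # so no original samples need to be stored.
        for label, (mean, std) in list(self.proto.items()):
            synth = rng.normal(mean, std, size=(self.synth_per_class, mean.size))
            X = np.vstack([X, synth])
            y = np.concatenate([y, np.full(self.synth_per_class, label)])
        # Update classifier and generator statistics per class.
        for label in np.unique(y):
            pts = X[y == label]
            self.centroids[label] = pts.mean(axis=0)
            self.proto[label] = (pts.mean(axis=0), pts.std(axis=0) + 1e-6)

    def predict(self, X):
        labels = list(self.centroids)
        dists = np.stack([np.linalg.norm(X - self.centroids[c], axis=1)
                          for c in labels])
        return np.array([labels[i] for i in dists.argmin(axis=0)])

# Two increments with disjoint classes: class 0 appears only in the first
# batch, but replayed pseudo-samples keep refreshing its statistics.
learner = PseudoRehearsalLearner()
learner.partial_fit(rng.normal(0.0, 0.1, (30, 2)), np.zeros(30, dtype=int))
learner.partial_fit(rng.normal(5.0, 0.1, (30, 2)), np.ones(30, dtype=int))
print(learner.predict(np.array([[0.0, 0.0], [5.0, 5.0]])))
```

In this toy setting the replay merely refreshes class statistics; the real benefit of pseudo-rehearsal shows up with gradient-trained models such as DNDF, where omitting replay degrades accuracy on old classes.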
&lt;br /&gt;
== Usage ==&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
The publication is available via [https://doi.org/10.48550/arXiv.2407.09039 this link]. Please use the reference below:&lt;br /&gt;
&#039;&#039;García-Santaclara, Pablo; Fernández-Castro, Bruno; Redondo, Rebeca (2024). Overcoming Catastrophic Forgetting in Tabular Data Classification: A Pseudorehearsal-based approach. doi:10.48550/arXiv.2407.09039&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Maturity===&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
Not available &amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Licence===&lt;br /&gt;
Not available&lt;br /&gt;
&lt;br /&gt;
== Resources ==&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;The publication is available [https://doi.org/10.48550/arXiv.2407.09039 here]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;For further information, please contact mmarquez@gradiant.org.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Provided by [https://gradiant.org/en/ GRADIANT]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Acknowledgement====&lt;br /&gt;
&amp;lt;p style=&amp;quot;font-size:90%;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;Part of the associated work to this paper was carried out by Gradiant under the AI REDGIO 5.0 project initiative.&#039;&#039;&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Relevant Categories==&lt;br /&gt;
[[Category:Theoretical Foundations]][[Category:Edge AI]][[Category:Machine Learning]]&lt;/div&gt;</summary>
		<author><name>Admino739mjm7</name></author>
	</entry>
	<entry>
		<id>https://wiki.ai-redgio50.s5labs.eu/index.php?title=File:Tril3_architecture.png&amp;diff=547</id>
		<title>File:Tril3 architecture.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.ai-redgio50.s5labs.eu/index.php?title=File:Tril3_architecture.png&amp;diff=547"/>
		<updated>2025-10-20T12:24:26Z</updated>

		<summary type="html">&lt;p&gt;Admino739mjm7: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Admino739mjm7</name></author>
	</entry>
	<entry>
		<id>https://wiki.ai-redgio50.s5labs.eu/index.php?title=AI_Reference_Implementations&amp;diff=546</id>
		<title>AI Reference Implementations</title>
		<link rel="alternate" type="text/html" href="https://wiki.ai-redgio50.s5labs.eu/index.php?title=AI_Reference_Implementations&amp;diff=546"/>
		<updated>2025-10-20T12:19:00Z</updated>

		<summary type="html">&lt;p&gt;Admino739mjm7: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;A compilation of tools and resources that can help you kickstart your Industry 5.0 journey.&lt;br /&gt;
&lt;br /&gt;
== Quality Control in Industry with CV and TinyML &#039;&#039;by ExpertAI&#039;&#039; ==&lt;br /&gt;
* [https://wiki.ai-redgio50.s5labs.eu/index.php?title=Quality_Control_in_Industry_with_CV_and_TinyML Quality Control in Industry with CV and TinyML]&lt;br /&gt;
&lt;br /&gt;
== FPGA Device Plugin &#039;&#039;by GRADIANT&#039;&#039; ==&lt;br /&gt;
* [https://wiki.ai-redgio50.s5labs.eu/index.php?title=FPGA_Device_Plugin FPGA Device Plugin for Kubernetes]&lt;br /&gt;
&lt;br /&gt;
== XuILVQ: A River Implementation of the Incremental Learning Vector Quantization for IoT &#039;&#039;by GRADIANT&#039;&#039; ==&lt;br /&gt;
* [https://wiki.ai-redgio50.s5labs.eu/index.php?title=XuILVQ:_A_River_Implementation_of_the_Incremental_Learning_Vector_Quantization_for_IoT XuILVQ: A River Implementation of the Incremental Learning Vector Quantization for IoT]&lt;br /&gt;
&lt;br /&gt;
== Data Analysis Dashboard &#039;&#039;by SCCH&#039;&#039; ==&lt;br /&gt;
* [https://wiki.ai-redgio50.s5labs.eu/index.php?title=Data_Analysis_Dashboard Data Analysis Dashboard]&lt;br /&gt;
&lt;br /&gt;
== Energy Usage Dataset and Model &#039;&#039;by PBN&#039;&#039; ==&lt;br /&gt;
* [https://wiki.ai-redgio50.s5labs.eu/index.php?title=Energy_Usage_Dataset Energy Usage Dataset and Model]&lt;br /&gt;
&lt;br /&gt;
== Fabric Defects Dataset &#039;&#039;by Katty Fashion&#039;&#039; ==&lt;br /&gt;
* [https://wiki.ai-redgio50.s5labs.eu/index.php?title=Fabrics_Defect_Dataset Fabric Defects Dataset]&lt;br /&gt;
&lt;br /&gt;
== Fashion Product Defects Dataset &#039;&#039;by Katty Fashion&#039;&#039; ==&lt;br /&gt;
* [https://wiki.ai-redgio50.s5labs.eu/index.php?title=Fashion_Product_Defects_Dataset Fashion Product Defects Dataset]&lt;br /&gt;
&lt;br /&gt;
== SmartSpot IoT Device &#039;&#039;by Libelium / HOPU&#039;&#039; ==&lt;br /&gt;
* [https://wiki.ai-redgio50.s5labs.eu/index.php?title=SmartSpot_IoT_Device SmartSpot IoT Device]&lt;br /&gt;
&lt;br /&gt;
== Labelled Force/Torque Time Series from Robotic Wheel Assembly Dataset &#039;&#039;by CTU&#039;&#039; ==&lt;br /&gt;
* [https://wiki.ai-redgio50.s5labs.eu/index.php?title=Labelled_Force/Torque_Time_Series_from_Robotic_Wheel_Assembly_Dataset Labelled Force/Torque Time Series from Robotic Wheel Assembly Dataset]&lt;br /&gt;
&lt;br /&gt;
== Anomaly Detection in Force/Torque Time Series From Delta Robot &#039;&#039;by CTU&#039;&#039; ==&lt;br /&gt;
* [https://wiki.ai-redgio50.s5labs.eu/index.php?title=Anomaly_Detection_in_Force/Torque_Time_Series_From_Delta_Robot Anomaly Detection in Force/Torque Time Series From Delta Robot]&lt;br /&gt;
&lt;br /&gt;
== Orthogonal Views Extractor from STEP File &#039;&#039;by TXT&#039;&#039; ==&lt;br /&gt;
* [https://wiki.ai-redgio50.s5labs.eu/index.php?title=Orthogonal_Views_Extractor_from_STEP_File Orthogonal Views Extractor from STEP File]&lt;br /&gt;
&lt;br /&gt;
== Orthogonal Views Extractor from STEP File - PythonOCC Version &#039;&#039;by TXT&#039;&#039; ==&lt;br /&gt;
* [https://wiki.ai-redgio50.s5labs.eu/index.php?title=Orthogonal_Views_Extractor_from_STEP_File_-_PythonOCC_Version Orthogonal Views Extractor from STEP File - PythonOCC Version]&lt;br /&gt;
&lt;br /&gt;
== Image Colour Count with Python &#039;&#039;by TXT&#039;&#039; ==&lt;br /&gt;
* [https://wiki.ai-redgio50.s5labs.eu/index.php?title=Image_Colour_Count_with_Python Image Colour Count with Python]&lt;br /&gt;
&lt;br /&gt;
== Bearing Fault Datasets &#039;&#039;by Flanders Make&#039;&#039; ==&lt;br /&gt;
* [https://wiki.ai-redgio50.s5labs.eu/index.php?title=Bearing_Fault_Datasets Bearing Fault Datasets]&lt;br /&gt;
&lt;br /&gt;
== D2P Toolbox &#039;&#039;by Flanders Make&#039;&#039; ==&lt;br /&gt;
* [https://wiki.ai-redgio50.s5labs.eu/index.php?title=D2P_Toolbox D2P Toolbox]&lt;br /&gt;
&lt;br /&gt;
== Pneumatic Pressure and Electrical Current Time-Series &#039;&#039;by JSI&#039;&#039; ==&lt;br /&gt;
* [https://wiki.ai-redgio50.s5labs.eu/index.php?title=Pneumatic_Pressure_and_Electrical_Current_Time_Series Pneumatic Pressure and Electrical Current Time-Series]&lt;br /&gt;
&lt;br /&gt;
== Error In Alignment (ERAL) Algorithm &#039;&#039;by JSI&#039;&#039; ==&lt;br /&gt;
* [https://wiki.ai-redgio50.s5labs.eu/index.php?title=Error_In_Alignment_(ERAL)_Algorithm Error In Alignment (ERAL) Algorithm]&lt;br /&gt;
&lt;br /&gt;
== Self-signed Certificates Generation Tool &#039;&#039;by JSI&#039;&#039; ==&lt;br /&gt;
* [https://wiki.ai-redgio50.s5labs.eu/index.php?title=Self-signed_Certificates_Generation_Tool Self-signed Certificates Generation Tool]&lt;br /&gt;
&lt;br /&gt;
== Acceleration Datasets &#039;&#039;by ACTEMA&#039;&#039; ==&lt;br /&gt;
* [https://wiki.ai-redgio50.s5labs.eu/index.php?title=Acceleration_Datasets Acceleration Datasets]&lt;br /&gt;
&lt;br /&gt;
== Cream Cheese Production and Quality Dataset &#039;&#039;by GRADIANT and Quescrem&#039;&#039; ==&lt;br /&gt;
* [https://wiki.ai-redgio50.s5labs.eu/index.php?title=Cream_Cheese_Production_and_Quality_Dataset Cream Cheese Production and Quality Dataset]&lt;/div&gt;</summary>
		<author><name>Admino739mjm7</name></author>
	</entry>
	<entry>
		<id>https://wiki.ai-redgio50.s5labs.eu/index.php?title=Cream_Cheese_Production_and_Quality_Dataset&amp;diff=545</id>
		<title>Cream Cheese Production and Quality Dataset</title>
		<link rel="alternate" type="text/html" href="https://wiki.ai-redgio50.s5labs.eu/index.php?title=Cream_Cheese_Production_and_Quality_Dataset&amp;diff=545"/>
		<updated>2025-10-20T12:17:25Z</updated>

		<summary type="html">&lt;p&gt;Admino739mjm7: /* Dataset Information */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Dataset containing sensor measurements and quality data from Quescrem&#039;s cream cheese production chain. &amp;lt;/strong&amp;gt; [[File:Quescrem.png|thumb|right|&amp;lt;div style=&amp;quot;font-size:88%;line-height: 1.5em&amp;quot;&amp;gt;Image 1: This dataset collects tabular data of Quescrem’s production chain of their main product (cream cheese), including sensor measurements from the production process as well as quality and composition parameters from laboratory analysis.&amp;lt;/div&amp;gt;]]&lt;br /&gt;
&lt;br /&gt;
== Asset Description ==&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
Dataset generated to train and validate the AI/ML models developed in the Pilot III SME-driven experiment led by Quescrem within the AI REDGIO 5.0 project: “AI at the Edge for Zero Defect Food Industry and Sustainability Gain”. The dataset collects tabular data from diverse product batches of Quescrem’s main product family, including sensor measurements from the production process (temperature, pressure, flow rate, fermentation times, etc.) as well as quality and composition parameters from laboratory analysis of the raw materials used and intermediate products (protein, fat, dry matter, acidity, etc.). The purpose of the dataset is to allow the application of advanced data analysis techniques and AI/ML models to provide insights into how the combination of all the previous features affects the main quality indicators of the released product, including hardness, acidity and pH.&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Asset Details ==&lt;br /&gt;
=== Dataset Information ===&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
The dataset was produced by designing a method that traces each product batch along the whole production chain, including every subprocess, from the initial raw-material mix to the final packaging of the cream cheese product. All data sources managed in Quescrem’s systems, and all the information stored in each of them, were considered, and the tables were linked to one another to provide this traceability. The goal was, starting from a specific final product batch ID, to retrieve all the information (available in each of the data sources) that corresponds to that specific product.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
That method was automated through a Python application, which collects and processes data from all the available sources and merges them into the final dataset, provided in CSV format. Then, given the large amount of information (features) available, it was necessary to identify which of the collected features were actually relevant for forecasting the quality KPIs; that is, which ones provide meaningful information about the production process and influence the quality parameters of the final product (the prediction targets).&amp;lt;/p&amp;gt;&lt;br /&gt;
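As a rough illustration of the batch-ID linkage such an application performs, a pandas sketch (the table and column names here are invented for the example, not Quescrem's actual schema):

```python
import pandas as pd

# Two hypothetical data sources keyed by the same batch ID: process sensor
# aggregates and laboratory quality measurements.
process = pd.DataFrame({
    "batch_id": ["B1", "B2"],
    "avg_pasteurization_temp": [72.3, 71.8],
})
lab = pd.DataFrame({
    "batch_id": ["B1", "B2"],
    "protein_pct": [8.1, 7.9],
    "ph": [4.6, 4.7],
})

# Link the tables on the shared batch ID to trace each batch across
# subprocesses, then export the merged dataset as CSV.
dataset = process.merge(lab, on="batch_id", how="inner")
dataset.to_csv("cream_cheese_dataset.csv", index=False)
print(dataset.shape)
```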
&lt;br /&gt;
As a result, the dataset used for the training and testing of the AI models for the AI REDGIO 5.0 pilot was built. The number of samples (product batches) included is quite limited, due to the complexity and inconsistencies found in the format of the available historical data and in the relationships between data sources. However, the dataset includes and relates variables corresponding to the whole production chain, which is extremely valuable: this is a very complex production process, with many interlinked subprocesses, and until now no data collection covering the overall process was available for each product batch.&lt;br /&gt;
&lt;br /&gt;
It should be taken into account that some of the variables included in the dataset come from real-time data streams, since the sensors read one sample of those variables every second for the duration of the corresponding production subprocess. In those cases, the average value is provided. &lt;br /&gt;
&lt;br /&gt;
Examples of some of the variables included in the dataset are the pH of the added cream, the fat and protein percentages of the mix before the pasteurization subprocess, the average pressure during the concentration subprocess, the average temperature of the mix during the pasteurization subprocess, the average pressure of the pasteurization tank, the average viscosity of the mix during the pasteurization subprocess, etc. &lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Usage ==&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
This is a dataset containing tabular data that can be used for training and testing AI/ML models.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Maturity===&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
Ongoing development: currently the dataset has around 300 pre-processed samples, annotated with the corresponding quality KPI values for each product batch. &amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Licence===&lt;br /&gt;
Proprietary&lt;br /&gt;
&lt;br /&gt;
== Resources ==&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;For further information, please contact danielestrada@quescrem.es.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Provided by [https://gradiant.org/en/ GRADIANT] and [https://quescrem.es/en/ Quescrem]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Acknowledgement====&lt;br /&gt;
&amp;lt;p style=&amp;quot;font-size:90%;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;The dataset was created in the framework of the AI REDGIO 5.0 project. It will be used in the Industrial Pilot III (AI at the Edge for Zero Defect Food Industry and Sustainability Gain), which is being developed by Quescrem and Gradiant.&#039;&#039;&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Relevant Categories==&lt;br /&gt;
[[Category:Dataset]][[Category:Process Optimisation]][[Category:Quality Control]][[Category:Waste Reduction]]&lt;/div&gt;</summary>
		<author><name>Admino739mjm7</name></author>
	</entry>
	<entry>
		<id>https://wiki.ai-redgio50.s5labs.eu/index.php?title=Cream_Cheese_Production_and_Quality_Dataset&amp;diff=544</id>
		<title>Cream Cheese Production and Quality Dataset</title>
		<link rel="alternate" type="text/html" href="https://wiki.ai-redgio50.s5labs.eu/index.php?title=Cream_Cheese_Production_and_Quality_Dataset&amp;diff=544"/>
		<updated>2025-10-20T12:16:54Z</updated>

		<summary type="html">&lt;p&gt;Admino739mjm7: Created page with &amp;quot;&amp;lt;strong&amp;gt;Dataset containing sensor measurements and quality data from Quescrem&amp;#039;s cream cheese production chain. &amp;lt;/strong&amp;gt; &amp;lt;div style=&amp;quot;font-size:88%;line-height: 1.5em&amp;quot;&amp;gt;Image 1: This dataset collects tabular data of Quescrem’s production chain of their main product (cream cheese), including sensor measurements from the production process as well as quality and composition parameters from laboratory analysis.&amp;lt;/div&amp;gt;  == Asset Description =...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Dataset containing sensor measurements and quality data from Quescrem&#039;s cream cheese production chain. &amp;lt;/strong&amp;gt; [[File:Quescrem.png|thumb|right|&amp;lt;div style=&amp;quot;font-size:88%;line-height: 1.5em&amp;quot;&amp;gt;Image 1: This dataset collects tabular data of Quescrem’s production chain of their main product (cream cheese), including sensor measurements from the production process as well as quality and composition parameters from laboratory analysis.&amp;lt;/div&amp;gt;]]&lt;br /&gt;
&lt;br /&gt;
== Asset Description ==&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
Dataset generated to train and validate the AI/ML models developed in the Pilot III SME-driven experiment led by Quescrem within the AI REDGIO 5.0 project: “AI at the Edge for Zero Defect Food Industry and Sustainability Gain”. The dataset collects tabular data from diverse product batches of Quescrem’s main product family, including sensor measurements from the production process (temperature, pressure, flow rate, fermentation times, etc.) as well as quality and composition parameters from laboratory analysis of the raw materials used and intermediate products (protein, fat, dry matter, acidity, etc.). The purpose of the dataset is to allow the application of advanced data analysis techniques and AI/ML models to provide insights into how the combination of all the previous features affects the main quality indicators of the released product, including hardness, acidity and pH.&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Asset Details ==&lt;br /&gt;
=== Dataset Information ===&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
The dataset was produced by designing a method that traces each product batch along the whole production chain, including every subprocess, from the initial raw-material mix to the final packaging of the cream cheese product. All data sources managed in Quescrem’s systems, and all the information stored in each of them, were considered, and the tables were linked to one another to provide this traceability. The goal was, starting from a specific final product batch ID, to retrieve all the information (available in each of the data sources) that corresponds to that specific product.&lt;br /&gt;
&lt;br /&gt;
That method was automated through a Python application, which collects and processes data from all the available sources and merges them into the final dataset, provided in CSV format. Then, given the large amount of information (features) available, it was necessary to identify which of the collected features were actually relevant for forecasting the quality KPIs; that is, which ones provide meaningful information about the production process and influence the quality parameters of the final product (the prediction targets).&lt;br /&gt;
&lt;br /&gt;
As a result, the dataset used for the training and testing of the AI models for the AI REDGIO 5.0 pilot was built. The number of samples (product batches) included is quite limited, due to the complexity and inconsistencies found in the format of the available historical data and in the relationships between data sources. However, the dataset includes and relates variables corresponding to the whole production chain, which is extremely valuable: this is a very complex production process, with many interlinked subprocesses, and until now no data collection covering the overall process was available for each product batch.&lt;br /&gt;
&lt;br /&gt;
It should be taken into account that some of the variables included in the dataset come from real-time data streams, since the sensors read one sample of those variables every second for the duration of the corresponding production subprocess. In those cases, the average value is provided. &lt;br /&gt;
&lt;br /&gt;
Examples of some of the variables included in the dataset are the pH of the added cream, the fat and protein percentages of the mix before the pasteurization subprocess, the average pressure during the concentration subprocess, the average temperature of the mix during the pasteurization subprocess, the average pressure of the pasteurization tank, the average viscosity of the mix during the pasteurization subprocess, etc. &lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Usage ==&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
This is a dataset containing tabular data that can be used for training and testing AI/ML models.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Maturity===&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
Ongoing development: currently the dataset has around 300 pre-processed samples, annotated with the corresponding quality KPI values for each product batch. &amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Licence===&lt;br /&gt;
Proprietary&lt;br /&gt;
&lt;br /&gt;
== Resources ==&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;For further information, please contact danielestrada@quescrem.es.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Provided by [https://gradiant.org/en/ GRADIANT] and [https://quescrem.es/en/ Quescrem]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Acknowledgement====&lt;br /&gt;
&amp;lt;p style=&amp;quot;font-size:90%;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;The dataset was created in the framework of the AI REDGIO 5.0 project. It will be used in the Industrial Pilot III (AI at the Edge for Zero Defect Food Industry and Sustainability Gain), which is being developed by Quescrem and Gradiant.&#039;&#039;&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Relevant Categories==&lt;br /&gt;
[[Category:Dataset]][[Category:Process Optimisation]][[Category:Quality Control]][[Category:Waste Reduction]]&lt;/div&gt;</summary>
		<author><name>Admino739mjm7</name></author>
	</entry>
	<entry>
		<id>https://wiki.ai-redgio50.s5labs.eu/index.php?title=File:Quescrem.png&amp;diff=543</id>
		<title>File:Quescrem.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.ai-redgio50.s5labs.eu/index.php?title=File:Quescrem.png&amp;diff=543"/>
		<updated>2025-10-20T12:08:04Z</updated>

		<summary type="html">&lt;p&gt;Admino739mjm7: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Admino739mjm7</name></author>
	</entry>
	<entry>
		<id>https://wiki.ai-redgio50.s5labs.eu/index.php?title=Data_Analysis_Dashboard&amp;diff=542</id>
		<title>Data Analysis Dashboard</title>
		<link rel="alternate" type="text/html" href="https://wiki.ai-redgio50.s5labs.eu/index.php?title=Data_Analysis_Dashboard&amp;diff=542"/>
		<updated>2025-10-20T09:17:41Z</updated>

		<summary type="html">&lt;p&gt;Admino739mjm7: /* Key Features */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Dashboard that allows human operators to monitor and extract knowledge from tabular data through visualization/interpretability, querying, and inference features.&amp;lt;/strong&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Asset Description ==&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The Data Analysis Dashboard facilitates the comprehensive monitoring of tabular data (e.g., from sensor arrays) with sophisticated processing and analysis capabilities. Its key benefits include intuitive visualisation, flexible querying, and the ability to infer patterns and detect anomalies, giving human operators critical decision-making support. It uses some classical Data and Knowledge Engineering methods (e.g., Knowledge Graphs) and is implemented based on the [https://streamlit.io/ Streamlit] framework.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Key Features ==&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt;Data Upload and Preprocessing:&amp;lt;/strong&amp;gt;&lt;br /&gt;
*Upload datasets via file input or URL.&lt;br /&gt;
*Manage missing values using various imputation methods.&lt;br /&gt;
*Encode categorical variables and coerce numeric data.&amp;lt;/br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:1.data preprocessing.png|center|x300px|Image Caption]]&lt;br /&gt;
&amp;lt;div align=&amp;quot;center&amp;quot; style=&amp;quot;font-size:88%;line-height: 2em&amp;quot;&amp;gt;&#039;&#039;Image 1: Data Preprocessing&#039;&#039;&amp;lt;/div&amp;gt;&lt;br /&gt;
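The preprocessing steps above might look as follows with pandas; column names and imputation choices are illustrative, not the dashboard's actual configuration:

```python
import pandas as pd

df = pd.DataFrame({
    "temp": [20.5, None, 22.1, "23.4"],  # mixed/missing sensor readings
    "line": ["A", "B", "A", None],       # categorical variable with a gap
})

# Coerce numeric data: invalid entries become NaN.
df["temp"] = pd.to_numeric(df["temp"], errors="coerce")

# Impute missing values (mean for numeric, mode for categorical).
df["temp"] = df["temp"].fillna(df["temp"].mean())
df["line"] = df["line"].fillna(df["line"].mode()[0])

# One-hot encode the categorical variable.
df = pd.get_dummies(df, columns=["line"])
print(df)
```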
&lt;br /&gt;
&amp;lt;strong&amp;gt;Correlation Analysis:&amp;lt;/strong&amp;gt;&lt;br /&gt;
*Compute Pearson and Spearman correlations, as well as Euclidean similarity.&lt;br /&gt;
*User-defined thresholds for filtering significant relationships.&amp;lt;/br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt;Knowledge Graph Creation:&amp;lt;/strong&amp;gt;&lt;br /&gt;
*Automatically generate RDF graphs representing significant correlations.&lt;br /&gt;
*Define relationships using user-specified thresholds.&amp;lt;/br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:2.correlation analysis.png|center|x300px|Image Caption]]&lt;br /&gt;
&amp;lt;div align=&amp;quot;center&amp;quot; style=&amp;quot;font-size:88%;line-height: 2em&amp;quot;&amp;gt;&#039;&#039;Image 2: Correlation Analysis&#039;&#039;&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt;Inference Rules:&amp;lt;/strong&amp;gt;&lt;br /&gt;
*Input custom IF-THEN rules to add inferred relationships to the RDF graph.&amp;lt;/br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:3.inference rules.png|center|x300px|Image Caption]]&lt;br /&gt;
&amp;lt;div align=&amp;quot;center&amp;quot; style=&amp;quot;font-size:88%;line-height: 2em&amp;quot;&amp;gt;&#039;&#039;Image 3: Inference Rules&#039;&#039;&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt;Visualization:&amp;lt;/strong&amp;gt;&lt;br /&gt;
*Visualize correlations using interactive Plotly subplots.&lt;br /&gt;
*Display the knowledge graph as a network with customizable aesthetics.&amp;lt;/br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:4.visualisation.png|center|x300px|Image Caption]]&lt;br /&gt;
&amp;lt;div align=&amp;quot;center&amp;quot; style=&amp;quot;font-size:88%;line-height: 2em&amp;quot;&amp;gt;&#039;&#039;Image 4: Data Visualisation&#039;&#039;&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt;SPARQL Querying:&amp;lt;/strong&amp;gt;&lt;br /&gt;
*Query the RDF graph using SPARQL with a user-friendly interface.&amp;lt;/br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:5. sparql.png|center|x300px|Image Caption]]&lt;br /&gt;
&amp;lt;div align=&amp;quot;center&amp;quot; style=&amp;quot;font-size:88%;line-height: 2em&amp;quot;&amp;gt;&#039;&#039;Image 5: SPARQL Querying&#039;&#039;&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Usage ==&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
1.	Upload a Dataset:&lt;br /&gt;
Begin by uploading a dataset in .csv format. This serves as the primary input for analysis. The system supports datasets of varying sizes, ensuring flexibility for small exploratory analyses or large-scale data studies.&amp;lt;/br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
2.	Select Columns for Analysis:&lt;br /&gt;
Choose specific pairs or groups of columns from the dataset to focus on correlation analysis. This step allows users to isolate meaningful relationships and ensures the analysis remains targeted to the relevant data dimensions.&amp;lt;/br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
3.	Set Thresholds for Similarity Metrics:&lt;br /&gt;
Fine-tune the thresholds for multiple similarity measures using an intuitive sidebar interface:&lt;br /&gt;
*	Pearson Correlation: Adjust the threshold to measure linear relationships between variables.&lt;br /&gt;
*	Spearman Correlation: Set a rank-based similarity threshold to capture monotonic relationships.&lt;br /&gt;
*	Euclidean Similarity: Define distance-based thresholds for evaluating proximity between data points.&lt;br /&gt;
This customization allows users to align the analysis with the needs of their domain or study.&amp;lt;br&amp;gt;&lt;br /&gt;
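The three threshold metrics above can be sketched as follows. This is a minimal illustration using pandas and NumPy with made-up column names, not the actual implementation of the dashboard:&lt;br /&gt;

```python
import numpy as np
import pandas as pd

def similarities(df: pd.DataFrame, col_a: str, col_b: str) -> dict:
    """Compute the three measures that are compared against the sidebar thresholds."""
    x, y = df[col_a], df[col_b]
    pearson = x.corr(y, method="pearson")    # linear relationship
    spearman = x.corr(y, method="spearman")  # monotonic (rank-based) relationship
    # Euclidean similarity: map the distance between the normalised columns
    # into (0, 1], where 1 means the columns are effectively identical.
    xn = (x - x.mean()) / x.std()
    yn = (y - y.mean()) / y.std()
    euclidean = 1.0 / (1.0 + np.linalg.norm(xn - yn))
    return {"pearson": pearson, "spearman": spearman, "euclidean": euclidean}

# Toy data: pressure is an exact linear function of temperature.
df = pd.DataFrame({"temperature": [20, 21, 23, 26, 30],
                   "pressure": [1.0, 1.1, 1.3, 1.6, 2.0]})
scores = similarities(df, "temperature", "pressure")
```

A pair of columns is then kept as a significant relationship only when its score passes the corresponding threshold.&lt;br /&gt;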
&lt;br /&gt;
&lt;br /&gt;
4.	Analyze Data and Visualize Results:&lt;br /&gt;
Gain deeper insights into your data with interactive visual tools. For example:&lt;br /&gt;
*	Access detailed visual representations of correlation metrics, including heatmaps and scatter plots. These visuals highlight patterns and relationships in the data, making it easier to interpret findings.&lt;br /&gt;
*	Explore an interactive knowledge graph that visually maps correlations and relationships between selected data columns, enabling intuitive navigation of complex connections.&lt;br /&gt;
&lt;br /&gt;
5.	Apply Inference Rules:&lt;br /&gt;
Enhance the knowledge graph by introducing domain-specific logic:&lt;br /&gt;
*	Enter custom rules in an IF-THEN format (e.g., “IF variable A &amp;gt; threshold, THEN infer relationship B”).&lt;br /&gt;
*	This step enables users to derive new relationships and enrich the dataset with inferred knowledge, tailoring the analysis to their specific research goals.&lt;br /&gt;
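As a rough sketch of how such a rule could be evaluated, here plain Python triples stand in for the RDF graph, and every name is illustrative rather than part of the actual vocabulary of the tool:&lt;br /&gt;

```python
# A tiny "graph" of (subject, predicate, object) triples standing in for RDF.
graph = {
    ("temperature", "correlatesWith", "pressure"),
    ("temperature", "pearson", "0.92"),
    ("humidity", "pearson", "0.40"),
}

def apply_rule(triples, threshold=0.9):
    """IF pearson(A) exceeds the threshold THEN infer (A, strongCorrelation, B)
    for every B that A already correlates with."""
    inferred = set()
    for s, p, o in triples:
        if p == "pearson" and float(o) > threshold:
            for s2, p2, o2 in triples:
                if s2 == s and p2 == "correlatesWith":
                    inferred.add((s2, "strongCorrelation", o2))
    return triples | inferred

enriched = apply_rule(graph)
```

In the dashboard itself the inferred triples are added to the RDF graph, so they become visible in the network view and can be validated through SPARQL queries.&lt;br /&gt;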
&lt;br /&gt;
6.	Execute SPARQL Queries on the RDF Graph:&lt;br /&gt;
Dive deeper into the generated knowledge graph through the SPARQL query interface, which lets you execute advanced queries and reveal hidden relationships within the data:&lt;br /&gt;
*	Perform complex queries to explore relationships, extract specific subsets of data, or validate inferred connections.&lt;br /&gt;
*	This feature integrates the power of semantic querying, enabling users to uncover insights that are not immediately apparent in the raw dataset.&lt;br /&gt;
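For example, a query of the following shape could list the column pairs linked by an inferred relationship; the prefix and property names here are hypothetical and depend on the vocabulary used when the graph was generated:&lt;br /&gt;

```sparql
PREFIX ex: &lt;http://example.org/&gt;

SELECT ?colA ?colB
WHERE {
  ?colA ex:strongCorrelation ?colB .
}
ORDER BY ?colA
```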
&lt;br /&gt;
&lt;br /&gt;
=== Licence ===&lt;br /&gt;
This project is licensed under the MIT License.&lt;br /&gt;
&lt;br /&gt;
== Resources ==&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;A live demo of this component can be found [https://airedgio-dashboard.streamlit.app/ here]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;The source code is available at the following [https://github.com/AI-REDGIO-5-0/data-dashboard/ GitHub repository]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Created by [https://www.scch.at/ Software Competence Center Hagenberg - SCCH]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Contact &#039;&#039;jorge.martinez-gil@scch.at&#039;&#039;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Acknowledgement ====&lt;br /&gt;
&amp;lt;p style=&amp;quot;font-size:90%;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;This tool has been developed mainly within the framework of the [https://trineflex.eu/ TrineFlex] project, funded by the European Union’s Horizon Europe research and innovation programme under Grant Agreement No 101058174.&#039;&#039;&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Relevant Categories ==&lt;br /&gt;
[[Category:Software-as-a-Service]][[Category:Collaborative Intelligence / Human-in-the-Loop]][[Category:Preventive Maintenance]][[Category:Process Optimisation]][[Category:Generic Purpose]][[Category:Cloud-based]]&lt;/div&gt;</summary>
		<author><name>Admino739mjm7</name></author>
	</entry>
	<entry>
		<id>https://wiki.ai-redgio50.s5labs.eu/index.php?title=Data_Analysis_Dashboard&amp;diff=541</id>
		<title>Data Analysis Dashboard</title>
		<link rel="alternate" type="text/html" href="https://wiki.ai-redgio50.s5labs.eu/index.php?title=Data_Analysis_Dashboard&amp;diff=541"/>
		<updated>2025-10-20T09:17:24Z</updated>

		<summary type="html">&lt;p&gt;Admino739mjm7: /* Key Features */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Dashboard that allows human operators to monitor and extract knowledge from tabular data through visualization/interpretability, querying, and inference features.&amp;lt;/strong&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Asset Description ==&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The Data Analysis Dashboard facilitates the comprehensive monitoring of tabular data (e.g., from sensor arrays) with sophisticated processing and analysis capabilities. Its key benefits include intuitive visualisation, flexible querying, and the ability to infer patterns and detect anomalies, giving human operators critical decision-making support. It uses some classical Data and Knowledge Engineering methods (e.g., Knowledge Graphs) and is implemented based on the [https://streamlit.io/ Streamlit] framework.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Key Features ==&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt;Data Upload and Preprocessing:&amp;lt;/strong&amp;gt;&lt;br /&gt;
*Upload datasets via file input or URL.&lt;br /&gt;
*Manage missing values using various imputation methods.&lt;br /&gt;
*Encode categorical variables and coerce numeric data.&lt;br /&gt;
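A minimal pandas sketch of the preprocessing steps listed above; the column names and the choice of mean imputation are illustrative only:&lt;br /&gt;

```python
import pandas as pd

df = pd.DataFrame({
    "machine": ["A", "B", "A", None],
    "temp": ["20.5", "21.0", None, "22.5"],
})

# Coerce numeric-looking strings to numbers; invalid entries become NaN.
df["temp"] = pd.to_numeric(df["temp"], errors="coerce")
# One possible imputation method: fill missing values with the column mean.
df["temp"] = df["temp"].fillna(df["temp"].mean())
# Encode the categorical column as integer codes (missing values become -1).
df["machine"] = df["machine"].astype("category").cat.codes
```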
&lt;br /&gt;
[[File:1.data preprocessing.png|center|x300px|Image Caption]]&lt;br /&gt;
&amp;lt;div align=&amp;quot;center&amp;quot; style=&amp;quot;font-size:88%;line-height: 2em&amp;quot;&amp;gt;&#039;&#039;Image 1: Data Preprocessing&#039;&#039;&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt;Correlation Analysis:&amp;lt;/strong&amp;gt;&lt;br /&gt;
*Compute Pearson and Spearman correlations, as well as Euclidean similarity.&lt;br /&gt;
*User-defined thresholds for filtering significant relationships.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt;Knowledge Graph Creation:&amp;lt;/strong&amp;gt;&lt;br /&gt;
*Automatically generate RDF graphs representing significant correlations.&lt;br /&gt;
*Define relationships using user-specified thresholds.&lt;br /&gt;
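The generation step can be sketched along these lines, with plain tuples standing in for RDF triples and an invented predicate name; the dashboard itself serialises a proper RDF graph:&lt;br /&gt;

```python
import itertools
import pandas as pd

def correlation_triples(df: pd.DataFrame, threshold: float = 0.8) -> list:
    """Emit one (colA, predicate, colB) triple per significant column pair."""
    corr = df.corr(method="pearson")
    return [(a, "correlatesWith", b)
            for a, b in itertools.combinations(df.columns, 2)
            if abs(corr.loc[a, b]) >= threshold]

# y is perfectly correlated with x; z is only weakly related to both.
df = pd.DataFrame({"x": [1, 2, 3, 4], "y": [2, 4, 6, 8], "z": [5, 1, 4, 2]})
triples = correlation_triples(df)
```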
&lt;br /&gt;
[[File:2.correlation analysis.png|center|x300px|Image Caption]]&lt;br /&gt;
&amp;lt;div align=&amp;quot;center&amp;quot; style=&amp;quot;font-size:88%;line-height: 2em&amp;quot;&amp;gt;&#039;&#039;Image 2: Correlation Analysis&#039;&#039;&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt;Inference Rules:&amp;lt;/strong&amp;gt;&lt;br /&gt;
*Input custom IF-THEN rules to add inferred relationships to the RDF graph.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:3.inference rules.png|center|x300px|Image Caption]]&lt;br /&gt;
&amp;lt;div align=&amp;quot;center&amp;quot; style=&amp;quot;font-size:88%;line-height: 2em&amp;quot;&amp;gt;&#039;&#039;Image 3: Inference Rules&#039;&#039;&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt;Visualization:&amp;lt;/strong&amp;gt;&lt;br /&gt;
*Visualize correlations using interactive Plotly subplots.&lt;br /&gt;
*Display the knowledge graph as a network with customizable aesthetics.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:4.visualisation.png|center|x300px|Image Caption]]&lt;br /&gt;
&amp;lt;div align=&amp;quot;center&amp;quot; style=&amp;quot;font-size:88%;line-height: 2em&amp;quot;&amp;gt;&#039;&#039;Image 4: Data Visualisation&#039;&#039;&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt;SPARQL Querying:&amp;lt;/strong&amp;gt;&lt;br /&gt;
*Query the RDF graph using SPARQL with a user-friendly interface.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:5. sparql.png|center|x300px|Image Caption]]&lt;br /&gt;
&amp;lt;div align=&amp;quot;center&amp;quot; style=&amp;quot;font-size:88%;line-height: 2em&amp;quot;&amp;gt;&#039;&#039;Image 5: SPARQL Querying&#039;&#039;&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Usage ==&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
1.	Upload a Dataset:&lt;br /&gt;
Begin by uploading a dataset in .csv format. This serves as the primary input for analysis. The system supports datasets of varying sizes, from small exploratory analyses to large-scale data studies.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
2.	Select Columns for Analysis:&lt;br /&gt;
Choose specific pairs or groups of columns from the dataset on which to focus the correlation analysis. This step lets users isolate meaningful relationships and keeps the analysis targeted at the relevant data dimensions.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
3.	Set Thresholds for Similarity Metrics:&lt;br /&gt;
Fine-tune the thresholds for multiple similarity measures using an intuitive sidebar interface:&lt;br /&gt;
*	Pearson Correlation: Adjust the threshold to measure linear relationships between variables.&lt;br /&gt;
*	Spearman Correlation: Set a rank-based similarity threshold to capture monotonic relationships.&lt;br /&gt;
*	Euclidean Similarity: Define distance-based thresholds for evaluating proximity between data points.&lt;br /&gt;
This customization allows users to align the analysis with the needs of their domain or study.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
4.	Analyze Data and Visualize Results:&lt;br /&gt;
Gain deeper insights into your data with interactive visual tools. For example:&lt;br /&gt;
*	Access detailed visual representations of correlation metrics, including heatmaps and scatter plots. These visuals highlight patterns and relationships in the data, making it easier to interpret findings.&lt;br /&gt;
*	Explore an interactive knowledge graph that visually maps correlations and relationships between selected data columns, enabling intuitive navigation of complex connections.&lt;br /&gt;
&lt;br /&gt;
5.	Apply Inference Rules:&lt;br /&gt;
Enhance the knowledge graph by introducing domain-specific logic:&lt;br /&gt;
*	Enter custom rules in an IF-THEN format (e.g., “IF variable A &amp;gt; threshold, THEN infer relationship B”).&lt;br /&gt;
*	This step enables users to derive new relationships and enrich the dataset with inferred knowledge, tailoring the analysis to their specific research goals.&lt;br /&gt;
&lt;br /&gt;
6.	Execute SPARQL Queries on the RDF Graph:&lt;br /&gt;
Dive deeper into the generated knowledge graph through the SPARQL query interface, which lets you execute advanced queries and reveal hidden relationships within the data:&lt;br /&gt;
*	Perform complex queries to explore relationships, extract specific subsets of data, or validate inferred connections.&lt;br /&gt;
*	This feature integrates the power of semantic querying, enabling users to uncover insights that are not immediately apparent in the raw dataset.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Licence ===&lt;br /&gt;
This project is licensed under the MIT License.&lt;br /&gt;
&lt;br /&gt;
== Resources ==&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;A live demo of this component can be found [https://airedgio-dashboard.streamlit.app/ here]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;The source code is available at the following [https://github.com/AI-REDGIO-5-0/data-dashboard/ GitHub repository]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Created by [https://www.scch.at/ Software Competence Center Hagenberg - SCCH]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Contact &#039;&#039;jorge.martinez-gil@scch.at&#039;&#039;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Acknowledgement ====&lt;br /&gt;
&amp;lt;p style=&amp;quot;font-size:90%;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;This tool has been developed mainly within the framework of the [https://trineflex.eu/ TrineFlex] project, funded by the European Union’s Horizon Europe research and innovation programme under Grant Agreement No 101058174.&#039;&#039;&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Relevant Categories ==&lt;br /&gt;
[[Category:Software-as-a-Service]][[Category:Collaborative Intelligence / Human-in-the-Loop]][[Category:Preventive Maintenance]][[Category:Process Optimisation]][[Category:Generic Purpose]][[Category:Cloud-based]]&lt;/div&gt;</summary>
		<author><name>Admino739mjm7</name></author>
	</entry>
	<entry>
		<id>https://wiki.ai-redgio50.s5labs.eu/index.php?title=Data_Analysis_Dashboard&amp;diff=540</id>
		<title>Data Analysis Dashboard</title>
		<link rel="alternate" type="text/html" href="https://wiki.ai-redgio50.s5labs.eu/index.php?title=Data_Analysis_Dashboard&amp;diff=540"/>
		<updated>2025-10-20T09:16:43Z</updated>

		<summary type="html">&lt;p&gt;Admino739mjm7: /* Usage */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Dashboard that allows human operators to monitor and extract knowledge from tabular data through visualization/interpretability, querying, and inference features.&amp;lt;/strong&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Asset Description ==&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The Data Analysis Dashboard facilitates the comprehensive monitoring of tabular data (e.g., from sensor arrays) with sophisticated processing and analysis capabilities. Its key benefits include intuitive visualisation, flexible querying, and the ability to infer patterns and detect anomalies, giving human operators critical decision-making support. It uses some classical Data and Knowledge Engineering methods (e.g., Knowledge Graphs) and is implemented based on the [https://streamlit.io/ Streamlit] framework.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Key Features ==&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt;Data Upload and Preprocessing:&amp;lt;/strong&amp;gt;&lt;br /&gt;
*Upload datasets via file input or URL.&lt;br /&gt;
*Manage missing values using various imputation methods.&lt;br /&gt;
*Encode categorical variables and coerce numeric data.&lt;br /&gt;
&lt;br /&gt;
[[File:1.data preprocessing.png|center|x300px|Image Caption]]&lt;br /&gt;
&amp;lt;div align=&amp;quot;center&amp;quot; style=&amp;quot;font-size:88%;line-height: 2em&amp;quot;&amp;gt;&#039;&#039;Image 1: Data Preprocessing&#039;&#039;&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt;Correlation Analysis:&amp;lt;/strong&amp;gt;&lt;br /&gt;
*Compute Pearson and Spearman correlations, as well as Euclidean similarity.&lt;br /&gt;
*User-defined thresholds for filtering significant relationships.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt;Knowledge Graph Creation:&amp;lt;/strong&amp;gt;&lt;br /&gt;
*Automatically generate RDF graphs representing significant correlations.&lt;br /&gt;
*Define relationships using user-specified thresholds.&lt;br /&gt;
&lt;br /&gt;
[[File:2.correlation analysis.png|center|x300px|Image Caption]]&lt;br /&gt;
&amp;lt;div align=&amp;quot;center&amp;quot; style=&amp;quot;font-size:88%;line-height: 2em&amp;quot;&amp;gt;&#039;&#039;Image 2: Correlation Analysis&#039;&#039;&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt;Inference Rules:&amp;lt;/strong&amp;gt;&lt;br /&gt;
*Input custom IF-THEN rules to add inferred relationships to the RDF graph.&lt;br /&gt;
&lt;br /&gt;
[[File:3.inference rules.png|center|x300px|Image Caption]]&lt;br /&gt;
&amp;lt;div align=&amp;quot;center&amp;quot; style=&amp;quot;font-size:88%;line-height: 2em&amp;quot;&amp;gt;&#039;&#039;Image 3: Inference Rules&#039;&#039;&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt;Visualization:&amp;lt;/strong&amp;gt;&lt;br /&gt;
*Visualize correlations using interactive Plotly subplots.&lt;br /&gt;
*Display the knowledge graph as a network with customizable aesthetics.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:4.visualisation.png|center|x300px|Image Caption]]&lt;br /&gt;
&amp;lt;div align=&amp;quot;center&amp;quot; style=&amp;quot;font-size:88%;line-height: 2em&amp;quot;&amp;gt;&#039;&#039;Image 4: Data Visualisation&#039;&#039;&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt;SPARQL Querying:&amp;lt;/strong&amp;gt;&lt;br /&gt;
*Query the RDF graph using SPARQL with a user-friendly interface.&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:5. sparql.png|center|x300px|Image Caption]]&lt;br /&gt;
&amp;lt;div align=&amp;quot;center&amp;quot; style=&amp;quot;font-size:88%;line-height: 2em&amp;quot;&amp;gt;&#039;&#039;Image 5: SPARQL Querying&#039;&#039;&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Usage ==&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
1.	Upload a Dataset:&lt;br /&gt;
Begin by uploading a dataset in .csv format. This serves as the primary input for analysis. The system supports datasets of varying sizes, from small exploratory analyses to large-scale data studies.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
2.	Select Columns for Analysis:&lt;br /&gt;
Choose specific pairs or groups of columns from the dataset on which to focus the correlation analysis. This step lets users isolate meaningful relationships and keeps the analysis targeted at the relevant data dimensions.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
3.	Set Thresholds for Similarity Metrics:&lt;br /&gt;
Fine-tune the thresholds for multiple similarity measures using an intuitive sidebar interface:&lt;br /&gt;
*	Pearson Correlation: Adjust the threshold to measure linear relationships between variables.&lt;br /&gt;
*	Spearman Correlation: Set a rank-based similarity threshold to capture monotonic relationships.&lt;br /&gt;
*	Euclidean Similarity: Define distance-based thresholds for evaluating proximity between data points.&lt;br /&gt;
This customization allows users to align the analysis with the needs of their domain or study.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
4.	Analyze Data and Visualize Results:&lt;br /&gt;
Gain deeper insights into your data with interactive visual tools. For example:&lt;br /&gt;
*	Access detailed visual representations of correlation metrics, including heatmaps and scatter plots. These visuals highlight patterns and relationships in the data, making it easier to interpret findings.&lt;br /&gt;
*	Explore an interactive knowledge graph that visually maps correlations and relationships between selected data columns, enabling intuitive navigation of complex connections.&lt;br /&gt;
&lt;br /&gt;
5.	Apply Inference Rules:&lt;br /&gt;
Enhance the knowledge graph by introducing domain-specific logic:&lt;br /&gt;
*	Enter custom rules in an IF-THEN format (e.g., “IF variable A &amp;gt; threshold, THEN infer relationship B”).&lt;br /&gt;
*	This step enables users to derive new relationships and enrich the dataset with inferred knowledge, tailoring the analysis to their specific research goals.&lt;br /&gt;
&lt;br /&gt;
6.	Execute SPARQL Queries on the RDF Graph:&lt;br /&gt;
Dive deeper into the generated knowledge graph through the SPARQL query interface, which lets you execute advanced queries and reveal hidden relationships within the data:&lt;br /&gt;
*	Perform complex queries to explore relationships, extract specific subsets of data, or validate inferred connections.&lt;br /&gt;
*	This feature integrates the power of semantic querying, enabling users to uncover insights that are not immediately apparent in the raw dataset.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Licence ===&lt;br /&gt;
This project is licensed under the MIT License.&lt;br /&gt;
&lt;br /&gt;
== Resources ==&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;A live demo of this component can be found [https://airedgio-dashboard.streamlit.app/ here]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;The source code is available at the following [https://github.com/AI-REDGIO-5-0/data-dashboard/ GitHub repository]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Created by [https://www.scch.at/ Software Competence Center Hagenberg - SCCH]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Contact &#039;&#039;jorge.martinez-gil@scch.at&#039;&#039;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Acknowledgement ====&lt;br /&gt;
&amp;lt;p style=&amp;quot;font-size:90%;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;This tool has been developed mainly within the framework of the [https://trineflex.eu/ TrineFlex] project, funded by the European Union’s Horizon Europe research and innovation programme under Grant Agreement No 101058174.&#039;&#039;&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Relevant Categories ==&lt;br /&gt;
[[Category:Software-as-a-Service]][[Category:Collaborative Intelligence / Human-in-the-Loop]][[Category:Preventive Maintenance]][[Category:Process Optimisation]][[Category:Generic Purpose]][[Category:Cloud-based]]&lt;/div&gt;</summary>
		<author><name>Admino739mjm7</name></author>
	</entry>
	<entry>
		<id>https://wiki.ai-redgio50.s5labs.eu/index.php?title=Data_Analysis_Dashboard&amp;diff=539</id>
		<title>Data Analysis Dashboard</title>
		<link rel="alternate" type="text/html" href="https://wiki.ai-redgio50.s5labs.eu/index.php?title=Data_Analysis_Dashboard&amp;diff=539"/>
		<updated>2025-10-20T09:16:25Z</updated>

		<summary type="html">&lt;p&gt;Admino739mjm7: /* Usage */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Dashboard that allows human operators to monitor and extract knowledge from tabular data through visualization/interpretability, querying, and inference features.&amp;lt;/strong&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Asset Description ==&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The Data Analysis Dashboard facilitates the comprehensive monitoring of tabular data (e.g., from sensor arrays) with sophisticated processing and analysis capabilities. Its key benefits include intuitive visualisation, flexible querying, and the ability to infer patterns and detect anomalies, giving human operators critical decision-making support. It uses some classical Data and Knowledge Engineering methods (e.g., Knowledge Graphs) and is implemented based on the [https://streamlit.io/ Streamlit] framework.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Key Features ==&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt;Data Upload and Preprocessing:&amp;lt;/strong&amp;gt;&lt;br /&gt;
*Upload datasets via file input or URL.&lt;br /&gt;
*Manage missing values using various imputation methods.&lt;br /&gt;
*Encode categorical variables and coerce numeric data.&lt;br /&gt;
&lt;br /&gt;
[[File:1.data preprocessing.png|center|x300px|Image Caption]]&lt;br /&gt;
&amp;lt;div align=&amp;quot;center&amp;quot; style=&amp;quot;font-size:88%;line-height: 2em&amp;quot;&amp;gt;&#039;&#039;Image 1: Data Preprocessing&#039;&#039;&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt;Correlation Analysis:&amp;lt;/strong&amp;gt;&lt;br /&gt;
*Compute Pearson and Spearman correlations, as well as Euclidean similarity.&lt;br /&gt;
*User-defined thresholds for filtering significant relationships.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt;Knowledge Graph Creation:&amp;lt;/strong&amp;gt;&lt;br /&gt;
*Automatically generate RDF graphs representing significant correlations.&lt;br /&gt;
*Define relationships using user-specified thresholds.&lt;br /&gt;
&lt;br /&gt;
[[File:2.correlation analysis.png|center|x300px|Image Caption]]&lt;br /&gt;
&amp;lt;div align=&amp;quot;center&amp;quot; style=&amp;quot;font-size:88%;line-height: 2em&amp;quot;&amp;gt;&#039;&#039;Image 2: Correlation Analysis&#039;&#039;&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt;Inference Rules:&amp;lt;/strong&amp;gt;&lt;br /&gt;
*Input custom IF-THEN rules to add inferred relationships to the RDF graph.&lt;br /&gt;
&lt;br /&gt;
[[File:3.inference rules.png|center|x300px|Image Caption]]&lt;br /&gt;
&amp;lt;div align=&amp;quot;center&amp;quot; style=&amp;quot;font-size:88%;line-height: 2em&amp;quot;&amp;gt;&#039;&#039;Image 3: Inference Rules&#039;&#039;&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt;Visualization:&amp;lt;/strong&amp;gt;&lt;br /&gt;
*Visualize correlations using interactive Plotly subplots.&lt;br /&gt;
*Display the knowledge graph as a network with customizable aesthetics.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:4.visualisation.png|center|x300px|Image Caption]]&lt;br /&gt;
&amp;lt;div align=&amp;quot;center&amp;quot; style=&amp;quot;font-size:88%;line-height: 2em&amp;quot;&amp;gt;&#039;&#039;Image 4: Data Visualisation&#039;&#039;&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt;SPARQL Querying:&amp;lt;/strong&amp;gt;&lt;br /&gt;
*Query the RDF graph using SPARQL with a user-friendly interface.&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:5. sparql.png|center|x300px|Image Caption]]&lt;br /&gt;
&amp;lt;div align=&amp;quot;center&amp;quot; style=&amp;quot;font-size:88%;line-height: 2em&amp;quot;&amp;gt;&#039;&#039;Image 5: SPARQL Querying&#039;&#039;&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Usage ==&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
1.	Upload a Dataset:&lt;br /&gt;
Begin by uploading a dataset in .csv format. This serves as the primary input for analysis. The system supports datasets of varying sizes, from small exploratory analyses to large-scale data studies.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
2.	Select Columns for Analysis:&lt;br /&gt;
Choose specific pairs or groups of columns from the dataset on which to focus the correlation analysis. This step lets users isolate meaningful relationships and keeps the analysis targeted at the relevant data dimensions.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
3.	Set Thresholds for Similarity Metrics:&lt;br /&gt;
Fine-tune the thresholds for multiple similarity measures using an intuitive sidebar interface:&lt;br /&gt;
*	Pearson Correlation: Adjust the threshold to measure linear relationships between variables.&lt;br /&gt;
*	Spearman Correlation: Set a rank-based similarity threshold to capture monotonic relationships.&lt;br /&gt;
*	Euclidean Similarity: Define distance-based thresholds for evaluating proximity between data points.&lt;br /&gt;
This customization allows users to align the analysis with the needs of their domain or study.&lt;br /&gt;
&lt;br /&gt;
4.	Analyze Data and Visualize Results:&lt;br /&gt;
Gain deeper insights into your data with interactive visual tools. For example:&lt;br /&gt;
*	Access detailed visual representations of correlation metrics, including heatmaps and scatter plots. These visuals highlight patterns and relationships in the data, making it easier to interpret findings.&lt;br /&gt;
*	Explore an interactive knowledge graph that visually maps correlations and relationships between selected data columns, enabling intuitive navigation of complex connections.&lt;br /&gt;
&lt;br /&gt;
5.	Apply Inference Rules:&lt;br /&gt;
Enhance the knowledge graph by introducing domain-specific logic:&lt;br /&gt;
*	Enter custom rules in an IF-THEN format (e.g., “IF variable A &amp;gt; threshold, THEN infer relationship B”).&lt;br /&gt;
*	This step enables users to derive new relationships and enrich the dataset with inferred knowledge, tailoring the analysis to their specific research goals.&lt;br /&gt;
&lt;br /&gt;
6.	Execute SPARQL Queries on the RDF Graph:&lt;br /&gt;
Dive deeper into the generated knowledge graph through the SPARQL query interface, which lets you execute advanced queries and reveal hidden relationships within the data:&lt;br /&gt;
*	Perform complex queries to explore relationships, extract specific subsets of data, or validate inferred connections.&lt;br /&gt;
*	This feature integrates the power of semantic querying, enabling users to uncover insights that are not immediately apparent in the raw dataset.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Licence ===&lt;br /&gt;
This project is licensed under the MIT License.&lt;br /&gt;
&lt;br /&gt;
== Resources ==&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;A live demo of this component can be found [https://airedgio-dashboard.streamlit.app/ here]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;The source code is available at the following [https://github.com/AI-REDGIO-5-0/data-dashboard/ GitHub repository]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Created by [https://www.scch.at/ Software Competence Center Hagenberg - SCCH]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Contact &#039;&#039;jorge.martinez-gil@scch.at&#039;&#039;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Acknowledgement ====&lt;br /&gt;
&amp;lt;p style=&amp;quot;font-size:90%;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;This tool has been developed mainly within the framework of the [https://trineflex.eu/ TrineFlex] project, funded by the European Union’s Horizon Europe research and innovation programme under Grant Agreement No 101058174.&#039;&#039;&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Relevant Categories ==&lt;br /&gt;
[[Category:Software-as-a-Service]][[Category:Collaborative Intelligence / Human-in-the-Loop]][[Category:Preventive Maintenance]][[Category:Process Optimisation]][[Category:Generic Purpose]][[Category:Cloud-based]]&lt;/div&gt;</summary>
		<author><name>Admino739mjm7</name></author>
	</entry>
	<entry>
		<id>https://wiki.ai-redgio50.s5labs.eu/index.php?title=Data_Analysis_Dashboard&amp;diff=538</id>
		<title>Data Analysis Dashboard</title>
		<link rel="alternate" type="text/html" href="https://wiki.ai-redgio50.s5labs.eu/index.php?title=Data_Analysis_Dashboard&amp;diff=538"/>
		<updated>2025-10-20T09:15:55Z</updated>

		<summary type="html">&lt;p&gt;Admino739mjm7: /* Usage */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Dashboard that allows human operators to monitor and extract knowledge from tabular data through visualization/interpretability, querying, and inference features.&amp;lt;/strong&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Asset Description ==&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The Data Analysis Dashboard facilitates the comprehensive monitoring of tabular data (e.g., from sensor arrays) with sophisticated processing and analysis capabilities. Its key benefits include intuitive visualisation, flexible querying, and the ability to infer patterns and detect anomalies, giving human operators critical decision-making support. It uses some classical Data and Knowledge Engineering methods (e.g., Knowledge Graphs) and is implemented based on the [https://streamlit.io/ Streamlit] framework.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Key Features ==&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt;Data Upload and Preprocessing:&amp;lt;/strong&amp;gt;&lt;br /&gt;
*Upload datasets via file input or URL.&lt;br /&gt;
*Manage missing values using various imputation methods.&lt;br /&gt;
*Encode categorical variables and coerce numeric data.&lt;br /&gt;
&lt;br /&gt;
[[File:1.data preprocessing.png|center|x300px|Image Caption]]&lt;br /&gt;
&amp;lt;div align=&amp;quot;center&amp;quot; style=&amp;quot;font-size:88%;line-height: 2em&amp;quot;&amp;gt;&#039;&#039;Image 1: Data Preprocessing&#039;&#039;&amp;lt;/div&amp;gt;&lt;br /&gt;
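The imputation step above can be sketched in a few lines of plain Python (mean imputation shown; the column values and variable names are illustrative, and the actual tool offers several imputation methods):&lt;br /&gt;

```python
# Minimal sketch of mean imputation for one numeric column.
# Values and names are illustrative only.
values = [20.1, None, 22.8, 24.0]   # a column with a missing reading

known = [v for v in values if v is not None]
mean = sum(known) / len(known)      # column mean of the known values

# Replace each missing entry with the column mean (about 22.3 here).
imputed = [mean if v is None else v for v in values]
```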
&lt;br /&gt;
&amp;lt;strong&amp;gt;Correlation Analysis:&amp;lt;/strong&amp;gt;&lt;br /&gt;
*Compute Pearson and Spearman correlations, as well as Euclidean similarity.&lt;br /&gt;
*User-defined thresholds for filtering significant relationships.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt;Knowledge Graph Creation:&amp;lt;/strong&amp;gt;&lt;br /&gt;
*Automatically generate RDF graphs representing significant correlations.&lt;br /&gt;
*Define relationships using user-specified thresholds.&lt;br /&gt;
&lt;br /&gt;
[[File:2.correlation analysis.png|center|x300px|Image Caption]]&lt;br /&gt;
&amp;lt;div align=&amp;quot;center&amp;quot; style=&amp;quot;font-size:88%;line-height: 2em&amp;quot;&amp;gt;&#039;&#039;Image 2: Correlation Analysis&#039;&#039;&amp;lt;/div&amp;gt;&lt;br /&gt;
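The thresholded correlation step can be sketched with numpy as follows (column names, values, and the threshold are illustrative, not taken from the actual implementation):&lt;br /&gt;

```python
import numpy as np

# Two example sensor columns (illustrative names and values).
temp  = np.array([20.1, 21.3, 22.8, 24.0])
power = np.array([5.0, 5.6, 6.1, 6.9])

# Pearson: strength of the linear relationship.
pearson = np.corrcoef(temp, power)[0, 1]

# Spearman: Pearson correlation of the ranks (monotonic relationship);
# argsort of argsort yields the rank of each value when values are distinct.
spearman = np.corrcoef(temp.argsort().argsort(),
                       power.argsort().argsort())[0, 1]

THRESHOLD = 0.8  # user-defined threshold from the sidebar
significant = abs(pearson) >= THRESHOLD  # would become an edge in the graph
```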
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt;Inference Rules:&amp;lt;/strong&amp;gt;&lt;br /&gt;
*Input custom IF-THEN rules to add inferred relationships to the RDF graph.&lt;br /&gt;
&lt;br /&gt;
[[File:3.inference rules.png|center|x300px|Image Caption]]&lt;br /&gt;
&amp;lt;div align=&amp;quot;center&amp;quot; style=&amp;quot;font-size:88%;line-height: 2em&amp;quot;&amp;gt;&#039;&#039;Image 3: Inference Rules&#039;&#039;&amp;lt;/div&amp;gt;&lt;br /&gt;
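A minimal, library-free sketch of such a rule, using plain tuples as a stand-in for RDF triples (all names here are illustrative):&lt;br /&gt;

```python
# Triples as (subject, predicate, object) tuples, a simplified stand-in
# for the RDF graph. The rule below encodes: IF a correlatesWith b
# THEN b correlatesWith a, i.e. the relation is treated as symmetric.
triples = {("temp", "correlatesWith", "power")}

def apply_symmetry_rule(triples, predicate):
    """Add the mirrored triple for every triple with this predicate."""
    inferred = {(o, predicate, s) for (s, p, o) in triples if p == predicate}
    return triples | inferred

enriched = apply_symmetry_rule(triples, "correlatesWith")
```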
&lt;br /&gt;
&amp;lt;strong&amp;gt;Visualization:&amp;lt;/strong&amp;gt;&lt;br /&gt;
*Visualize correlations using interactive Plotly subplots.&lt;br /&gt;
*Display the knowledge graph as a network with customizable aesthetics.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:4.visualisation.png|center|x300px|Image Caption]]&lt;br /&gt;
&amp;lt;div align=&amp;quot;center&amp;quot; style=&amp;quot;font-size:88%;line-height: 2em&amp;quot;&amp;gt;&#039;&#039;Image 4: Data Visualisation&#039;&#039;&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt;SPARQL Querying:&amp;lt;/strong&amp;gt;&lt;br /&gt;
*Query the RDF graph using SPARQL with a user-friendly interface.&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:5. sparql.png|center|x300px|Image Caption]]&lt;br /&gt;
&amp;lt;div align=&amp;quot;center&amp;quot; style=&amp;quot;font-size:88%;line-height: 2em&amp;quot;&amp;gt;&#039;&#039;Image 5: SPARQL Querying&#039;&#039;&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Usage ==&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
1.	Upload a Dataset:&lt;br /&gt;
Begin by uploading a dataset in .csv format. This serves as the primary input for analysis. The system supports datasets of varying sizes, ensuring flexibility for small exploratory analyses or large-scale data studies.&lt;br /&gt;
&lt;br /&gt;
2.	Select Columns for Analysis:&lt;br /&gt;
Choose the specific pairs or groups of columns on which to focus the correlation analysis. This step allows users to isolate meaningful relationships and keeps the analysis targeted at the relevant data dimensions.&lt;br /&gt;
&lt;br /&gt;
3.	Set Thresholds for Similarity Metrics:&lt;br /&gt;
Fine-tune the thresholds for multiple similarity measures using an intuitive sidebar interface:&lt;br /&gt;
*	Pearson Correlation: Adjust the threshold to measure linear relationships between variables.&lt;br /&gt;
*	Spearman Correlation: Set a rank-based similarity threshold to capture monotonic relationships.&lt;br /&gt;
*	Euclidean Similarity: Define distance-based thresholds for evaluating proximity between data points.&lt;br /&gt;
This customization allows users to align the analysis with the needs of their domain or study.&lt;br /&gt;
&lt;br /&gt;
4.	Analyze Data and Visualize Results:&lt;br /&gt;
Gain deeper insights into your data with interactive visual tools. For example:&lt;br /&gt;
*	Access detailed visual representations of correlation metrics, including heatmaps and scatter plots. These visuals highlight patterns and relationships in the data, making it easier to interpret findings.&lt;br /&gt;
*	Explore an interactive knowledge graph that visually maps correlations and relationships between selected data columns, enabling intuitive navigation of complex connections.&lt;br /&gt;
&lt;br /&gt;
5.	Apply Inference Rules:&lt;br /&gt;
Enhance the knowledge graph by introducing domain-specific logic:&lt;br /&gt;
*	Enter custom rules in an IF-THEN format (e.g., “IF variable A &amp;gt; threshold, THEN infer relationship B”).&lt;br /&gt;
*	This step enables users to derive new relationships and enrich the dataset with inferred knowledge, tailoring the analysis to their specific research goals.&lt;br /&gt;
&lt;br /&gt;
6.	Execute SPARQL Queries on the RDF Graph:&lt;br /&gt;
Use the SPARQL query interface to execute advanced queries against the generated knowledge graph and reveal hidden relationships within the data:&lt;br /&gt;
*	Perform complex queries to explore relationships, extract specific subsets of data, or validate inferred connections.&lt;br /&gt;
*	This feature integrates the power of semantic querying, enabling users to uncover insights that are not immediately apparent in the raw dataset.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Licence===&lt;br /&gt;
This project is licensed under the MIT License.&lt;br /&gt;
&lt;br /&gt;
== Resources ==&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;A live demo of this component can be found [https://airedgio-dashboard.streamlit.app/ here]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;The source code is available at the following [https://github.com/AI-REDGIO-5-0/data-dashboard/ GitHub repository]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Created by [https://www.scch.at/ Software Competence Center Hagenberg - SCCH]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Contact &#039;&#039;jorge.martinez-gil@scch.at&#039;&#039;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Acknowledgement====&lt;br /&gt;
&amp;lt;p style=&amp;quot;font-size:90%;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;This tool has been mainly developed in the frame of the project [https://trineflex.eu/ TrineFlex] from the European Union’s Horizon Europe research and innovation programme under Grant Agreement No 101058174.&#039;&#039;&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Relevant Categories==&lt;br /&gt;
[[Category:Software-as-a-Service]][[Category:Collaborative Intelligence / Human-in-the-Loop]][[Category:Preventive Maintenance]][[Category:Process Optimisation]][[Category:Generic Purpose]][[Category:Cloud-based]]&lt;/div&gt;</summary>
		<author><name>Admino739mjm7</name></author>
	</entry>
	<entry>
		<id>https://wiki.ai-redgio50.s5labs.eu/index.php?title=Data_Analysis_Dashboard&amp;diff=537</id>
		<title>Data Analysis Dashboard</title>
		<link rel="alternate" type="text/html" href="https://wiki.ai-redgio50.s5labs.eu/index.php?title=Data_Analysis_Dashboard&amp;diff=537"/>
		<updated>2025-10-20T09:15:17Z</updated>

		<summary type="html">&lt;p&gt;Admino739mjm7: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Dashboard that allows human operators to monitor and extract knowledge from tabular data through visualization/interpretability, querying, and inference features.&amp;lt;/strong&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Asset Description ==&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The Data Analysis Dashboard facilitates the comprehensive monitoring of tabular data (e.g., from sensor arrays) with sophisticated processing and analysis capabilities. Its key benefits include intuitive visualisation, flexible querying, and the ability to infer patterns and detect anomalies, giving human operators critical decision-making support. It draws on classical Data and Knowledge Engineering methods (e.g., Knowledge Graphs) and is implemented with the [https://streamlit.io/ Streamlit] framework.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Key Features ==&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt;Data Upload and Preprocessing:&amp;lt;/strong&amp;gt;&lt;br /&gt;
*Upload datasets via file input or URL.&lt;br /&gt;
*Manage missing values using various imputation methods.&lt;br /&gt;
*Encode categorical variables and coerce numeric data.&lt;br /&gt;
&lt;br /&gt;
[[File:1.data preprocessing.png|center|x300px|Image Caption]]&lt;br /&gt;
&amp;lt;div align=&amp;quot;center&amp;quot; style=&amp;quot;font-size:88%;line-height: 2em&amp;quot;&amp;gt;&#039;&#039;Image 1: Data Preprocessing&#039;&#039;&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt;Correlation Analysis:&amp;lt;/strong&amp;gt;&lt;br /&gt;
*Compute Pearson and Spearman correlations, as well as Euclidean similarity.&lt;br /&gt;
*User-defined thresholds for filtering significant relationships.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt;Knowledge Graph Creation:&amp;lt;/strong&amp;gt;&lt;br /&gt;
*Automatically generate RDF graphs representing significant correlations.&lt;br /&gt;
*Define relationships using user-specified thresholds.&lt;br /&gt;
&lt;br /&gt;
[[File:2.correlation analysis.png|center|x300px|Image Caption]]&lt;br /&gt;
&amp;lt;div align=&amp;quot;center&amp;quot; style=&amp;quot;font-size:88%;line-height: 2em&amp;quot;&amp;gt;&#039;&#039;Image 2: Correlation Analysis&#039;&#039;&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt;Inference Rules:&amp;lt;/strong&amp;gt;&lt;br /&gt;
*Input custom IF-THEN rules to add inferred relationships to the RDF graph.&lt;br /&gt;
&lt;br /&gt;
[[File:3.inference rules.png|center|x300px|Image Caption]]&lt;br /&gt;
&amp;lt;div align=&amp;quot;center&amp;quot; style=&amp;quot;font-size:88%;line-height: 2em&amp;quot;&amp;gt;&#039;&#039;Image 3: Inference Rules&#039;&#039;&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt;Visualization:&amp;lt;/strong&amp;gt;&lt;br /&gt;
*Visualize correlations using interactive Plotly subplots.&lt;br /&gt;
*Display the knowledge graph as a network with customizable aesthetics.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:4.visualisation.png|center|x300px|Image Caption]]&lt;br /&gt;
&amp;lt;div align=&amp;quot;center&amp;quot; style=&amp;quot;font-size:88%;line-height: 2em&amp;quot;&amp;gt;&#039;&#039;Image 4: Data Visualisation&#039;&#039;&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt;SPARQL Querying:&amp;lt;/strong&amp;gt;&lt;br /&gt;
*Query the RDF graph using SPARQL with a user-friendly interface.&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:5. sparql.png|center|x300px|Image Caption]]&lt;br /&gt;
&amp;lt;div align=&amp;quot;center&amp;quot; style=&amp;quot;font-size:88%;line-height: 2em&amp;quot;&amp;gt;&#039;&#039;Image 5: SPARQL Querying&#039;&#039;&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Usage ==&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
1.	Upload a Dataset:&lt;br /&gt;
Begin by uploading a dataset in .csv format. This serves as the primary input for analysis. The system supports datasets of varying sizes, ensuring flexibility for small exploratory analyses or large-scale data studies.&lt;br /&gt;
&lt;br /&gt;
2.	Select Columns for Analysis:&lt;br /&gt;
Choose specific pairs or groups of columns from the dataset to focus on correlation analysis. This step allows users to isolate meaningful relationships and ensures the analysis remains targeted to the relevant data dimensions.&lt;br /&gt;
&lt;br /&gt;
3.	Set Thresholds for Similarity Metrics:&lt;br /&gt;
Fine-tune the thresholds for multiple similarity measures using an intuitive sidebar interface:&lt;br /&gt;
*	Pearson Correlation: Adjust the threshold to measure linear relationships between variables.&lt;br /&gt;
*	Spearman Correlation: Set a rank-based similarity threshold to capture monotonic relationships.&lt;br /&gt;
*	Euclidean Similarity: Define distance-based thresholds for evaluating proximity between data points.&lt;br /&gt;
This customization allows users to align the analysis with the needs of their domain or study.&lt;br /&gt;
&lt;br /&gt;
4.	Analyze Data and Visualize Results:&lt;br /&gt;
Gain deeper insights into your data with interactive visual tools. For example:&lt;br /&gt;
*	Access detailed visual representations of correlation metrics, including heatmaps and scatter plots. These visuals highlight patterns and relationships in the data, making it easier to interpret findings.&lt;br /&gt;
*	Explore an interactive knowledge graph that visually maps correlations and relationships between selected data columns, enabling intuitive navigation of complex connections.&lt;br /&gt;
&lt;br /&gt;
5.	Apply Inference Rules:&lt;br /&gt;
Enhance the knowledge graph by introducing domain-specific logic:&lt;br /&gt;
*	Enter custom rules in an IF-THEN format (e.g., “IF variable A &amp;gt; threshold, THEN infer relationship B”).&lt;br /&gt;
*	This step enables users to derive new relationships and enrich the dataset with inferred knowledge, tailoring the analysis to their specific research goals.&lt;br /&gt;
&lt;br /&gt;
6.	Execute SPARQL Queries on the RDF Graph:&lt;br /&gt;
Use the SPARQL interface to go deeper into the knowledge graph, execute advanced queries, and reveal hidden relationships and insights within the data:&lt;br /&gt;
*	Dive deeper into the generated knowledge graph using the SPARQL query interface.&lt;br /&gt;
*	Perform complex queries to explore relationships, extract specific subsets of data, or validate inferred connections.&lt;br /&gt;
*	This feature integrates the power of semantic querying, enabling users to uncover insights that are not immediately apparent in the raw dataset.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Licence===&lt;br /&gt;
This project is licensed under the MIT License.&lt;br /&gt;
&lt;br /&gt;
== Resources ==&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;A live demo of this component can be found [https://airedgio-dashboard.streamlit.app/ here]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;The source code is available at the following [https://github.com/AI-REDGIO-5-0/data-dashboard/ GitHub repository]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Created by [https://www.scch.at/ Software Competence Center Hagenberg - SCCH]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Contact &#039;&#039;jorge.martinez-gil@scch.at&#039;&#039;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Acknowledgement====&lt;br /&gt;
&amp;lt;p style=&amp;quot;font-size:90%;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;This tool has been mainly developed in the frame of the project [https://trineflex.eu/ TrineFlex] from the European Union’s Horizon Europe research and innovation programme under Grant Agreement No 101058174.&#039;&#039;&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Relevant Categories==&lt;br /&gt;
[[Category:Software-as-a-Service]][[Category:Collaborative Intelligence / Human-in-the-Loop]][[Category:Preventive Maintenance]][[Category:Process Optimisation]][[Category:Generic Purpose]][[Category:Cloud-based]]&lt;/div&gt;</summary>
		<author><name>Admino739mjm7</name></author>
	</entry>
	<entry>
		<id>https://wiki.ai-redgio50.s5labs.eu/index.php?title=File:5._sparql.png&amp;diff=536</id>
		<title>File:5. sparql.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.ai-redgio50.s5labs.eu/index.php?title=File:5._sparql.png&amp;diff=536"/>
		<updated>2025-10-20T09:14:32Z</updated>

		<summary type="html">&lt;p&gt;Admino739mjm7: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Admino739mjm7</name></author>
	</entry>
	<entry>
		<id>https://wiki.ai-redgio50.s5labs.eu/index.php?title=File:4.visualisation.png&amp;diff=535</id>
		<title>File:4.visualisation.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.ai-redgio50.s5labs.eu/index.php?title=File:4.visualisation.png&amp;diff=535"/>
		<updated>2025-10-20T09:11:44Z</updated>

		<summary type="html">&lt;p&gt;Admino739mjm7: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Admino739mjm7</name></author>
	</entry>
	<entry>
		<id>https://wiki.ai-redgio50.s5labs.eu/index.php?title=Data_Analysis_Dashboard&amp;diff=534</id>
		<title>Data Analysis Dashboard</title>
		<link rel="alternate" type="text/html" href="https://wiki.ai-redgio50.s5labs.eu/index.php?title=Data_Analysis_Dashboard&amp;diff=534"/>
		<updated>2025-10-20T05:42:24Z</updated>

		<summary type="html">&lt;p&gt;Admino739mjm7: /* Usage */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Dashboard that allows human operators to monitor and extract knowledge from tabular data through visualization/interpretability, querying, and inference features.&amp;lt;/strong&amp;gt; [[File:Data analysis dash 1.jpg|thumb|right|&amp;lt;div style=&amp;quot;font-size:88%;line-height: 1.5em&amp;quot;&amp;gt;Image 1: The main view of the Data Analysis Dashboard&amp;lt;/div&amp;gt;]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Asset Description ==&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The Data Analysis Dashboard facilitates the comprehensive monitoring of tabular data (e.g., from sensor arrays) with sophisticated processing and analysis capabilities. Its key benefits include intuitive visualisation, flexible querying, and the ability to infer patterns and detect anomalies, giving human operators critical decision-making support. It draws on classical Data and Knowledge Engineering methods (e.g., Knowledge Graphs) and is implemented with the [https://streamlit.io/ Streamlit] framework.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Key Features ==&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt;Data Upload and Preprocessing:&amp;lt;/strong&amp;gt;&lt;br /&gt;
*Upload datasets via file input or URL.&lt;br /&gt;
*Manage missing values using various imputation methods.&lt;br /&gt;
*Encode categorical variables and coerce numeric data.&lt;br /&gt;
&lt;br /&gt;
[[File:1.data preprocessing.png|center|x300px|Image Caption]]&lt;br /&gt;
&amp;lt;div align=&amp;quot;center&amp;quot; style=&amp;quot;font-size:88%;line-height: 2em&amp;quot;&amp;gt;&#039;&#039;Image 2: Data Preprocessing&#039;&#039;&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt;Correlation Analysis:&amp;lt;/strong&amp;gt;&lt;br /&gt;
*Compute Pearson and Spearman correlations, as well as Euclidean similarity.&lt;br /&gt;
*User-defined thresholds for filtering significant relationships.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt;Knowledge Graph Creation:&amp;lt;/strong&amp;gt;&lt;br /&gt;
*Automatically generate RDF graphs representing significant correlations.&lt;br /&gt;
*Define relationships using user-specified thresholds.&lt;br /&gt;
&lt;br /&gt;
[[File:2.correlation analysis.png|center|x300px|Image Caption]]&lt;br /&gt;
&amp;lt;div align=&amp;quot;center&amp;quot; style=&amp;quot;font-size:88%;line-height: 2em&amp;quot;&amp;gt;&#039;&#039;Image 3: Correlation Analysis&#039;&#039;&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt;Inference Rules:&amp;lt;/strong&amp;gt;&lt;br /&gt;
*Input custom IF-THEN rules to add inferred relationships to the RDF graph.&lt;br /&gt;
&lt;br /&gt;
[[File:3.inference rules.png|center|x300px|Image Caption]]&lt;br /&gt;
&amp;lt;div align=&amp;quot;center&amp;quot; style=&amp;quot;font-size:88%;line-height: 2em&amp;quot;&amp;gt;&#039;&#039;Image 4: Inference Rules&#039;&#039;&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt;Visualization:&amp;lt;/strong&amp;gt;&lt;br /&gt;
*Visualize correlations using interactive Plotly subplots.&lt;br /&gt;
*Display the knowledge graph as a network with customizable aesthetics.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt;SPARQL Querying:&amp;lt;/strong&amp;gt;&lt;br /&gt;
*Query the RDF graph using SPARQL with a user-friendly interface.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Usage ==&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
1.	Upload a Dataset:&lt;br /&gt;
Begin by uploading a dataset in .csv format. This serves as the primary input for analysis. The system supports datasets of varying sizes, ensuring flexibility for small exploratory analyses or large-scale data studies.&lt;br /&gt;
&lt;br /&gt;
2.	Select Columns for Analysis:&lt;br /&gt;
Choose specific pairs or groups of columns from the dataset to focus on correlation analysis. This step allows users to isolate meaningful relationships and ensures the analysis remains targeted to the relevant data dimensions.&lt;br /&gt;
&lt;br /&gt;
3.	Set Thresholds for Similarity Metrics:&lt;br /&gt;
Fine-tune the thresholds for multiple similarity measures using an intuitive sidebar interface:&lt;br /&gt;
*	Pearson Correlation: Adjust the threshold to measure linear relationships between variables.&lt;br /&gt;
*	Spearman Correlation: Set a rank-based similarity threshold to capture monotonic relationships.&lt;br /&gt;
*	Euclidean Similarity: Define distance-based thresholds for evaluating proximity between data points.&lt;br /&gt;
This customization allows users to align the analysis with the needs of their domain or study.&lt;br /&gt;
&lt;br /&gt;
4.	Analyze Data and Visualize Results:&lt;br /&gt;
Gain deeper insights into your data with interactive visual tools. For example:&lt;br /&gt;
*	Access detailed visual representations of correlation metrics, including heatmaps and scatter plots. These visuals highlight patterns and relationships in the data, making it easier to interpret findings.&lt;br /&gt;
*	Explore an interactive knowledge graph that visually maps correlations and relationships between selected data columns, enabling intuitive navigation of complex connections.&lt;br /&gt;
&lt;br /&gt;
5.	Apply Inference Rules:&lt;br /&gt;
Enhance the knowledge graph by introducing domain-specific logic:&lt;br /&gt;
*	Enter custom rules in an IF-THEN format (e.g., “IF variable A &amp;gt; threshold, THEN infer relationship B”).&lt;br /&gt;
*	This step enables users to derive new relationships and enrich the dataset with inferred knowledge, tailoring the analysis to their specific research goals.&lt;br /&gt;
&lt;br /&gt;
6.	Execute SPARQL Queries on the RDF Graph:&lt;br /&gt;
Use the SPARQL interface to go deeper into the knowledge graph, execute advanced queries, and reveal hidden relationships and insights within the data:&lt;br /&gt;
*	Dive deeper into the generated knowledge graph using the SPARQL query interface.&lt;br /&gt;
*	Perform complex queries to explore relationships, extract specific subsets of data, or validate inferred connections.&lt;br /&gt;
*	This feature integrates the power of semantic querying, enabling users to uncover insights that are not immediately apparent in the raw dataset.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Licence===&lt;br /&gt;
This project is licensed under the MIT License.&lt;br /&gt;
&lt;br /&gt;
== Resources ==&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;A live demo of this component can be found [https://airedgio-dashboard.streamlit.app/ here]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;The source code is available at the following [https://github.com/AI-REDGIO-5-0/data-dashboard/ GitHub repository]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Created by [https://www.scch.at/ Software Competence Center Hagenberg - SCCH]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Contact &#039;&#039;jorge.martinez-gil@scch.at&#039;&#039;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Acknowledgement====&lt;br /&gt;
&amp;lt;p style=&amp;quot;font-size:90%;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;This tool has been mainly developed in the frame of the project [https://trineflex.eu/ TrineFlex] from the European Union’s Horizon Europe research and innovation programme under Grant Agreement No 101058174.&#039;&#039;&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Relevant Categories==&lt;br /&gt;
[[Category:Software-as-a-Service]][[Category:Collaborative Intelligence / Human-in-the-Loop]][[Category:Preventive Maintenance]][[Category:Process Optimisation]][[Category:Generic Purpose]][[Category:Cloud-based]]&lt;/div&gt;</summary>
		<author><name>Admino739mjm7</name></author>
	</entry>
	<entry>
		<id>https://wiki.ai-redgio50.s5labs.eu/index.php?title=Data_Analysis_Dashboard&amp;diff=533</id>
		<title>Data Analysis Dashboard</title>
		<link rel="alternate" type="text/html" href="https://wiki.ai-redgio50.s5labs.eu/index.php?title=Data_Analysis_Dashboard&amp;diff=533"/>
		<updated>2025-10-20T05:41:57Z</updated>

		<summary type="html">&lt;p&gt;Admino739mjm7: /* Usage */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Dashboard that allows human operators to monitor and extract knowledge from tabular data through visualization/interpretability, querying, and inference features.&amp;lt;/strong&amp;gt; [[File:Data analysis dash 1.jpg|thumb|right|&amp;lt;div style=&amp;quot;font-size:88%;line-height: 1.5em&amp;quot;&amp;gt;Image 1: The main view of the Data Analysis Dashboard&amp;lt;/div&amp;gt;]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Asset Description ==&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The Data Analysis Dashboard facilitates the comprehensive monitoring of tabular data (e.g., from sensor arrays) with sophisticated processing and analysis capabilities. Its key benefits include intuitive visualisation, flexible querying, and the ability to infer patterns and detect anomalies, giving human operators critical decision-making support. It draws on classical Data and Knowledge Engineering methods (e.g., Knowledge Graphs) and is implemented with the [https://streamlit.io/ Streamlit] framework.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Key Features ==&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt;Data Upload and Preprocessing:&amp;lt;/strong&amp;gt;&lt;br /&gt;
*Upload datasets via file input or URL.&lt;br /&gt;
*Manage missing values using various imputation methods.&lt;br /&gt;
*Encode categorical variables and coerce numeric data.&lt;br /&gt;
&lt;br /&gt;
[[File:1.data preprocessing.png|center|x300px|Image Caption]]&lt;br /&gt;
&amp;lt;div align=&amp;quot;center&amp;quot; style=&amp;quot;font-size:88%;line-height: 2em&amp;quot;&amp;gt;&#039;&#039;Image 2: Data Preprocessing&#039;&#039;&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt;Correlation Analysis:&amp;lt;/strong&amp;gt;&lt;br /&gt;
*Compute Pearson and Spearman correlations, as well as Euclidean similarity.&lt;br /&gt;
*User-defined thresholds for filtering significant relationships.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt;Knowledge Graph Creation:&amp;lt;/strong&amp;gt;&lt;br /&gt;
*Automatically generate RDF graphs representing significant correlations.&lt;br /&gt;
*Define relationships using user-specified thresholds.&lt;br /&gt;
&lt;br /&gt;
[[File:2.correlation analysis.png|center|x300px|Image Caption]]&lt;br /&gt;
&amp;lt;div align=&amp;quot;center&amp;quot; style=&amp;quot;font-size:88%;line-height: 2em&amp;quot;&amp;gt;&#039;&#039;Image 3: Correlation Analysis&#039;&#039;&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt;Inference Rules:&amp;lt;/strong&amp;gt;&lt;br /&gt;
*Input custom IF-THEN rules to add inferred relationships to the RDF graph.&lt;br /&gt;
&lt;br /&gt;
[[File:3.inference rules.png|center|x300px|Image Caption]]&lt;br /&gt;
&amp;lt;div align=&amp;quot;center&amp;quot; style=&amp;quot;font-size:88%;line-height: 2em&amp;quot;&amp;gt;&#039;&#039;Image 4: Inference Rules&#039;&#039;&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt;Visualization:&amp;lt;/strong&amp;gt;&lt;br /&gt;
*Visualize correlations using interactive Plotly subplots.&lt;br /&gt;
*Display the knowledge graph as a network with customizable aesthetics.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt;SPARQL Querying:&amp;lt;/strong&amp;gt;&lt;br /&gt;
*Query the RDF graph using SPARQL with a user-friendly interface.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Usage ==&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
1.	Upload a Dataset:&lt;br /&gt;
Begin by uploading a dataset in .csv format. This serves as the primary input for analysis. The system supports datasets of varying sizes, ensuring flexibility for small exploratory analyses or large-scale data studies.&lt;br /&gt;
&lt;br /&gt;
2.	Select Columns for Analysis:&lt;br /&gt;
Choose specific pairs or groups of columns from the dataset to focus on correlation analysis. This step allows users to isolate meaningful relationships and ensures the analysis remains targeted to the relevant data dimensions.&lt;br /&gt;
&lt;br /&gt;
3.	Set Thresholds for Similarity Metrics:&lt;br /&gt;
Fine-tune the thresholds for multiple similarity measures using an intuitive sidebar interface:&lt;br /&gt;
*	Pearson Correlation: Adjust the threshold to measure linear relationships between variables.&lt;br /&gt;
*	Spearman Correlation: Set a rank-based similarity threshold to capture monotonic relationships.&lt;br /&gt;
*	Euclidean Similarity: Define distance-based thresholds for evaluating proximity between data points.&lt;br /&gt;
This customization allows users to align the analysis with the needs of their domain or study.&lt;br /&gt;
&lt;br /&gt;
4.	Analyze Data and Visualize Results:&lt;br /&gt;
Gain deeper insights into your data with interactive visual tools. For example:&lt;br /&gt;
*	Access detailed visual representations of correlation metrics, including heatmaps and scatter plots. These visuals highlight patterns and relationships in the data, making it easier to interpret findings.&lt;br /&gt;
*	Explore an interactive knowledge graph that visually maps correlations and relationships between selected data columns, enabling intuitive navigation of complex connections.&lt;br /&gt;
&lt;br /&gt;
5.	Apply Inference Rules:&lt;br /&gt;
Enhance the knowledge graph by introducing domain-specific logic:&lt;br /&gt;
*	Enter custom rules in an IF-THEN format (e.g., “IF variable A &amp;gt; threshold, THEN infer relationship B”).&lt;br /&gt;
*	This step enables users to derive new relationships and enrich the dataset with inferred knowledge, tailoring the analysis to their specific research goals.&lt;br /&gt;
&lt;br /&gt;
6.	Execute SPARQL Queries on the RDF Graph:&lt;br /&gt;
Use the SPARQL interface to go deeper into the knowledge graph, execute advanced queries, and reveal hidden relationships and insights within the data:&lt;br /&gt;
*	Dive deeper into the generated knowledge graph using the SPARQL query interface.&lt;br /&gt;
*	Perform complex queries to explore relationships, extract specific subsets of data, or validate inferred connections.&lt;br /&gt;
*	This feature integrates the power of semantic querying, enabling users to uncover insights that are not immediately apparent in the raw dataset.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;Note: This asset is under ongoing development.&#039;&#039;&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Licence===&lt;br /&gt;
This project is licensed under the MIT License.&lt;br /&gt;
&lt;br /&gt;
== Resources ==&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;A live demo of this component can be found [https://airedgio-dashboard.streamlit.app/ here]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;The source code is available at the following [https://github.com/AI-REDGIO-5-0/data-dashboard/ GitHub repository]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Created by [https://www.scch.at/ Software Competence Center Hagenberg - SCCH]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Contact &#039;&#039;jorge.martinez-gil@scch.at&#039;&#039;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Acknowledgement ====&lt;br /&gt;
&amp;lt;p style=&amp;quot;font-size:90%;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;This tool has been developed mainly in the framework of the [https://trineflex.eu/ TrineFlex] project, funded by the European Union’s Horizon Europe research and innovation programme under Grant Agreement No 101058174.&#039;&#039;&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Relevant Categories ==&lt;br /&gt;
[[Category:Software-as-a-Service]][[Category:Collaborative Intelligence / Human-in-the-Loop]][[Category:Preventive Maintenance]][[Category:Process Optimisation]][[Category:Generic Purpose]][[Category:Cloud-based]]&lt;/div&gt;</summary>
		<author><name>Admino739mjm7</name></author>
	</entry>
	<entry>
		<id>https://wiki.ai-redgio50.s5labs.eu/index.php?title=Data_Analysis_Dashboard&amp;diff=532</id>
		<title>Data Analysis Dashboard</title>
		<link rel="alternate" type="text/html" href="https://wiki.ai-redgio50.s5labs.eu/index.php?title=Data_Analysis_Dashboard&amp;diff=532"/>
		<updated>2025-10-20T05:41:28Z</updated>

		<summary type="html">&lt;p&gt;Admino739mjm7: /* Key Features */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Dashboard that allows human operators to monitor and extract knowledge from tabular data through visualization/interpretability, querying, and inference features.&amp;lt;/strong&amp;gt; [[File:Data analysis dash 1.jpg|thumb|right|&amp;lt;div style=&amp;quot;font-size:88%;line-height: 1.5em&amp;quot;&amp;gt;Image 1: The main view of the Data Analysis Dashboard&amp;lt;/div&amp;gt;]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Asset Description ==&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The Data Analysis Dashboard facilitates the comprehensive monitoring of tabular data (e.g., from sensor arrays) with sophisticated processing and analysis capabilities. Its key benefits include intuitive visualisation, flexible querying, and the ability to infer patterns and detect anomalies, giving human operators critical decision-making support. It uses some classical Data and Knowledge Engineering methods (e.g., Knowledge Graphs) and is implemented based on the [https://streamlit.io/ Streamlit] framework.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Key Features ==&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt;Data Upload and Preprocessing:&amp;lt;/strong&amp;gt;&lt;br /&gt;
*Upload datasets via file input or URL.&lt;br /&gt;
*Manage missing values using various imputation methods.&lt;br /&gt;
*Encode categorical variables and coerce numeric data.&lt;br /&gt;
&lt;br /&gt;
[[File:1.data preprocessing.png|center|x300px|Data preprocessing]]&lt;br /&gt;
&amp;lt;div align=&amp;quot;center&amp;quot; style=&amp;quot;font-size:88%;line-height: 2em&amp;quot;&amp;gt;&#039;&#039;Image 2: Data Preprocessing&#039;&#039;&amp;lt;/div&amp;gt;&lt;br /&gt;
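The imputation and encoding steps above can be sketched in plain Python. This is an illustrative sketch only, not the dashboard's actual code; the function names are hypothetical:

```python
import statistics

def impute_mean(values):
    """Replace missing entries (None) with the mean of the observed values."""
    observed = [v for v in values if v is not None]
    mean = statistics.fmean(observed)
    return [mean if v is None else v for v in values]

def label_encode(values):
    """Map each distinct category to an integer code, in first-seen order."""
    codes = {}
    return [codes.setdefault(v, len(codes)) for v in values]
```

In the dashboard itself these operations are configured interactively per column; libraries such as pandas provide equivalent operations (e.g. `fillna`) for larger datasets.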
&lt;br /&gt;
&amp;lt;strong&amp;gt;Correlation Analysis:&amp;lt;/strong&amp;gt;&lt;br /&gt;
*Compute Pearson and Spearman correlations, as well as Euclidean similarity.&lt;br /&gt;
*User-defined thresholds for filtering significant relationships.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt;Knowledge Graph Creation:&amp;lt;/strong&amp;gt;&lt;br /&gt;
*Automatically generate RDF graphs representing significant correlations.&lt;br /&gt;
*Define relationships using user-specified thresholds.&lt;br /&gt;
&lt;br /&gt;
[[File:2.correlation analysis.png|center|x300px|Correlation analysis]]&lt;br /&gt;
&amp;lt;div align=&amp;quot;center&amp;quot; style=&amp;quot;font-size:88%;line-height: 2em&amp;quot;&amp;gt;&#039;&#039;Image 3: Correlation Analysis&#039;&#039;&amp;lt;/div&amp;gt;&lt;br /&gt;
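The three similarity measures and their thresholds can be illustrated with a minimal pure-Python sketch. Assumed simplifications (not from the tool itself): no tie handling in the Spearman ranks, and a 1/(1+d) mapping of Euclidean distance to a similarity in (0, 1]:

```python
import math

def pearson(x, y):
    """Pearson correlation: strength of the linear relationship."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def spearman(x, y):
    """Spearman correlation: Pearson on the ranks (ties not handled here)."""
    def ranks(v):
        order = sorted(range(len(v)), key=v.__getitem__)
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    return pearson(ranks(x), ranks(y))

def euclidean_similarity(x, y):
    """Map Euclidean distance into (0, 1]; 1.0 means identical columns."""
    return 1.0 / (1.0 + math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y))))

def is_significant(x, y, threshold=0.8):
    """Keep a column pair only if |Pearson| reaches the user-set threshold."""
    return abs(pearson(x, y)) >= threshold
```

Only pairs passing the user-defined thresholds would then be added as edges of the knowledge graph.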
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt;Inference Rules:&amp;lt;/strong&amp;gt;&lt;br /&gt;
*Input custom IF-THEN rules to add inferred relationships to the RDF graph.&lt;br /&gt;
&lt;br /&gt;
[[File:3.inference rules.png|center|x300px|Inference rules]]&lt;br /&gt;
&amp;lt;div align=&amp;quot;center&amp;quot; style=&amp;quot;font-size:88%;line-height: 2em&amp;quot;&amp;gt;&#039;&#039;Image 4: Inference Rules&#039;&#039;&amp;lt;/div&amp;gt;&lt;br /&gt;
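A rule of the form “IF correlation(A, B) &gt; threshold THEN infer relation R(A, B)” can be sketched as follows; the relation label and the sample data are hypothetical, not taken from the tool:

```python
def apply_rules(correlations, rules):
    """Derive new (subject, relation, object) triples from column-pair
    correlation values: one triple per rule whose threshold is exceeded."""
    triples = set()
    for (col_a, col_b), value in correlations.items():
        for threshold, relation in rules:
            if value > threshold:
                triples.add((col_a, relation, col_b))
    return triples

# Hypothetical usage: one rule, applied to two measured correlations.
corr = {("temp", "pressure"): 0.92, ("temp", "humidity"): 0.30}
rules = [(0.9, "stronglyCorrelatedWith")]
inferred = apply_rules(corr, rules)
```

The inferred triples would then be added to the RDF graph alongside the directly computed correlation edges.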
&lt;br /&gt;
&amp;lt;strong&amp;gt;Visualization:&amp;lt;/strong&amp;gt;&lt;br /&gt;
*Visualize correlations using interactive Plotly subplots.&lt;br /&gt;
*Display the knowledge graph as a network with customizable aesthetics.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt;SPARQL Querying:&amp;lt;/strong&amp;gt;&lt;br /&gt;
*Query the RDF graph using SPARQL with a user-friendly interface.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Usage ==&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
1.	Upload a Dataset:&lt;br /&gt;
Begin by uploading a dataset in .csv format. This serves as the primary input for analysis. The system supports datasets of varying sizes, ensuring flexibility for small exploratory analyses or large-scale data studies.&lt;br /&gt;
&lt;br /&gt;
2.	Select Columns for Analysis:&lt;br /&gt;
Choose specific pairs or groups of columns from the dataset to focus on correlation analysis. This step allows users to isolate meaningful relationships and ensures the analysis remains targeted to the relevant data dimensions.&lt;br /&gt;
&lt;br /&gt;
3.	Set Thresholds for Similarity Metrics:&lt;br /&gt;
Fine-tune the thresholds for multiple similarity measures using an intuitive sidebar interface:&lt;br /&gt;
•	Pearson Correlation: Adjust the threshold to measure linear relationships between variables.&lt;br /&gt;
•	Spearman Correlation: Set a rank-based similarity threshold to capture monotonic relationships.&lt;br /&gt;
•	Euclidean Similarity: Define distance-based thresholds for evaluating proximity between data points.&lt;br /&gt;
This customization allows users to align the analysis with the needs of their domain or study.&lt;br /&gt;
&lt;br /&gt;
4.	Analyze Data and Visualize Results:&lt;br /&gt;
Gain deeper insights into your data with interactive visual tools. For example:&lt;br /&gt;
•	Access detailed visual representations of correlation metrics, including heatmaps and scatter plots. These visuals highlight patterns and relationships in the data, making it easier to interpret findings.&lt;br /&gt;
•	Explore an interactive knowledge graph that visually maps correlations and relationships between selected data columns, enabling intuitive navigation of complex connections.&lt;br /&gt;
&lt;br /&gt;
5.	Apply Inference Rules:&lt;br /&gt;
Enhance the knowledge graph by introducing domain-specific logic:&lt;br /&gt;
•	Enter custom rules in an IF-THEN format (e.g., “IF variable A &amp;gt; threshold, THEN infer relationship B”).&lt;br /&gt;
•	This step enables users to derive new relationships and enrich the dataset with inferred knowledge, tailoring the analysis to their specific research goals.&lt;br /&gt;
&lt;br /&gt;
6.	Execute SPARQL Queries on the RDF Graph:&lt;br /&gt;
Use the SPARQL query interface to dive deeper into the generated knowledge graph, executing advanced queries to reveal hidden relationships and insights within the data:&lt;br /&gt;
•	Perform complex queries to explore relationships, extract specific subsets of data, or validate inferred connections.&lt;br /&gt;
•	This feature integrates the power of semantic querying, enabling users to uncover insights that are not immediately apparent in the raw dataset.&lt;br /&gt;
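At its core, a SPARQL SELECT over the RDF graph matches triple patterns containing variables. A miniature pure-Python matcher conveys the idea; the real dashboard would use an RDF library for this, and the graph contents below are hypothetical:

```python
def match(triples, pattern):
    """Match one SPARQL-style triple pattern against a set of triples.
    Terms starting with '?' are variables; returns one binding per match."""
    results = []
    for triple in triples:
        binding = {}
        for term, value in zip(pattern, triple):
            if term.startswith("?"):
                if binding.get(term, value) != value:
                    break  # same variable bound to two different values
                binding[term] = value
            elif term != value:
                break  # constant term does not match
        else:
            results.append(binding)
    return results

# Roughly analogous SPARQL: SELECT ?x WHERE { :temp :correlatesWith ?x }
graph = {("temp", "correlatesWith", "pressure"),
         ("temp", "correlatesWith", "flow")}
bindings = match(graph, ("temp", "correlatesWith", "?x"))
```

Full SPARQL adds joins over multiple patterns, filters, and aggregation, but each pattern is resolved by exactly this kind of variable binding.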
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In summary: upload and preprocess a CSV file, then explore the data and its underlying relations flexibly by visualising correlation analyses for selected feature pairs, building a knowledge graph of the significant relationships, and querying that graph directly through the SPARQL interface.&lt;br /&gt;
&lt;br /&gt;
[[File:Data analysis dash 3.png|center|x300px|Visualisation, querying and inference]]&lt;br /&gt;
&amp;lt;div align=&amp;quot;center&amp;quot; style=&amp;quot;font-size:88%;line-height: 2em&amp;quot;&amp;gt;&#039;&#039;Image 5: Visualisation, querying and inference&#039;&#039;&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;Note: This asset is under ongoing development.&#039;&#039;&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Licence ===&lt;br /&gt;
This project is licensed under the MIT License.&lt;br /&gt;
&lt;br /&gt;
== Resources ==&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;A live demo of this component can be found [https://airedgio-dashboard.streamlit.app/ here]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;The source code is available at the following [https://github.com/AI-REDGIO-5-0/data-dashboard/ GitHub repository]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Created by [https://www.scch.at/ Software Competence Center Hagenberg - SCCH]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Contact &#039;&#039;jorge.martinez-gil@scch.at&#039;&#039;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Acknowledgement ====&lt;br /&gt;
&amp;lt;p style=&amp;quot;font-size:90%;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;This tool has been developed mainly in the framework of the [https://trineflex.eu/ TrineFlex] project, funded by the European Union’s Horizon Europe research and innovation programme under Grant Agreement No 101058174.&#039;&#039;&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Relevant Categories ==&lt;br /&gt;
[[Category:Software-as-a-Service]][[Category:Collaborative Intelligence / Human-in-the-Loop]][[Category:Preventive Maintenance]][[Category:Process Optimisation]][[Category:Generic Purpose]][[Category:Cloud-based]]&lt;/div&gt;</summary>
		<author><name>Admino739mjm7</name></author>
	</entry>
	<entry>
		<id>https://wiki.ai-redgio50.s5labs.eu/index.php?title=File:3.inference_rules.png&amp;diff=531</id>
		<title>File:3.inference rules.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.ai-redgio50.s5labs.eu/index.php?title=File:3.inference_rules.png&amp;diff=531"/>
		<updated>2025-10-20T05:41:00Z</updated>

		<summary type="html">&lt;p&gt;Admino739mjm7: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Admino739mjm7</name></author>
	</entry>
	<entry>
		<id>https://wiki.ai-redgio50.s5labs.eu/index.php?title=File:2.correlation_analysis.png&amp;diff=530</id>
		<title>File:2.correlation analysis.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.ai-redgio50.s5labs.eu/index.php?title=File:2.correlation_analysis.png&amp;diff=530"/>
		<updated>2025-10-20T05:39:51Z</updated>

		<summary type="html">&lt;p&gt;Admino739mjm7: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Admino739mjm7</name></author>
	</entry>
	<entry>
		<id>https://wiki.ai-redgio50.s5labs.eu/index.php?title=Data_Analysis_Dashboard&amp;diff=529</id>
		<title>Data Analysis Dashboard</title>
		<link rel="alternate" type="text/html" href="https://wiki.ai-redgio50.s5labs.eu/index.php?title=Data_Analysis_Dashboard&amp;diff=529"/>
		<updated>2025-10-20T05:39:32Z</updated>

		<summary type="html">&lt;p&gt;Admino739mjm7: /* Key Features */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Dashboard that allows human operators to monitor and extract knowledge from tabular data through visualization/interpretability, querying, and inference features.&amp;lt;/strong&amp;gt; [[File:Data analysis dash 1.jpg|thumb|right|&amp;lt;div style=&amp;quot;font-size:88%;line-height: 1.5em&amp;quot;&amp;gt;Image 1: The main view of the Data Analysis Dashboard&amp;lt;/div&amp;gt;]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Asset Description ==&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The Data Analysis Dashboard facilitates the comprehensive monitoring of tabular data (e.g., from sensor arrays) with sophisticated processing and analysis capabilities. Its key benefits include intuitive visualisation, flexible querying, and the ability to infer patterns and detect anomalies, giving human operators critical decision-making support. It uses some classical Data and Knowledge Engineering methods (e.g., Knowledge Graphs) and is implemented based on the [https://streamlit.io/ Streamlit] framework.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Key Features ==&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt;Data Upload and Preprocessing:&amp;lt;/strong&amp;gt;&lt;br /&gt;
*Upload datasets via file input or URL.&lt;br /&gt;
*Manage missing values using various imputation methods.&lt;br /&gt;
*Encode categorical variables and coerce numeric data.&lt;br /&gt;
&lt;br /&gt;
[[File:1.data preprocessing.png|center|x300px|Data preprocessing]]&lt;br /&gt;
&amp;lt;div align=&amp;quot;center&amp;quot; style=&amp;quot;font-size:88%;line-height: 2em&amp;quot;&amp;gt;&#039;&#039;Image 2: Data Preprocessing&#039;&#039;&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt;Correlation Analysis:&amp;lt;/strong&amp;gt;&lt;br /&gt;
*Compute Pearson and Spearman correlations, as well as Euclidean similarity.&lt;br /&gt;
*User-defined thresholds for filtering significant relationships.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt;Knowledge Graph Creation:&amp;lt;/strong&amp;gt;&lt;br /&gt;
*Automatically generate RDF graphs representing significant correlations.&lt;br /&gt;
*Define relationships using user-specified thresholds.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt;Inference Rules:&amp;lt;/strong&amp;gt;&lt;br /&gt;
*Input custom IF-THEN rules to add inferred relationships to the RDF graph.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt;Visualization:&amp;lt;/strong&amp;gt;&lt;br /&gt;
*Visualize correlations using interactive Plotly subplots.&lt;br /&gt;
*Display the knowledge graph as a network with customizable aesthetics.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt;SPARQL Querying:&amp;lt;/strong&amp;gt;&lt;br /&gt;
*Query the RDF graph using SPARQL with a user-friendly interface.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Usage ==&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
1.	Upload a Dataset:&lt;br /&gt;
Begin by uploading a dataset in .csv format. This serves as the primary input for analysis. The system supports datasets of varying sizes, ensuring flexibility for small exploratory analyses or large-scale data studies.&lt;br /&gt;
&lt;br /&gt;
2.	Select Columns for Analysis:&lt;br /&gt;
Choose specific pairs or groups of columns from the dataset to focus on correlation analysis. This step allows users to isolate meaningful relationships and ensures the analysis remains targeted to the relevant data dimensions.&lt;br /&gt;
&lt;br /&gt;
3.	Set Thresholds for Similarity Metrics:&lt;br /&gt;
Fine-tune the thresholds for multiple similarity measures using an intuitive sidebar interface:&lt;br /&gt;
•	Pearson Correlation: Adjust the threshold to measure linear relationships between variables.&lt;br /&gt;
•	Spearman Correlation: Set a rank-based similarity threshold to capture monotonic relationships.&lt;br /&gt;
•	Euclidean Similarity: Define distance-based thresholds for evaluating proximity between data points.&lt;br /&gt;
This customization allows users to align the analysis with the needs of their domain or study.&lt;br /&gt;
&lt;br /&gt;
4.	Analyze Data and Visualize Results:&lt;br /&gt;
Gain deeper insights into your data with interactive visual tools. For example:&lt;br /&gt;
•	Access detailed visual representations of correlation metrics, including heatmaps and scatter plots. These visuals highlight patterns and relationships in the data, making it easier to interpret findings.&lt;br /&gt;
•	Explore an interactive knowledge graph that visually maps correlations and relationships between selected data columns, enabling intuitive navigation of complex connections.&lt;br /&gt;
&lt;br /&gt;
5.	Apply Inference Rules:&lt;br /&gt;
Enhance the knowledge graph by introducing domain-specific logic:&lt;br /&gt;
•	Enter custom rules in an IF-THEN format (e.g., “IF variable A &amp;gt; threshold, THEN infer relationship B”).&lt;br /&gt;
•	This step enables users to derive new relationships and enrich the dataset with inferred knowledge, tailoring the analysis to their specific research goals.&lt;br /&gt;
&lt;br /&gt;
6.	Execute SPARQL Queries on the RDF Graph:&lt;br /&gt;
Use the SPARQL query interface to dive deeper into the generated knowledge graph, executing advanced queries to reveal hidden relationships and insights within the data:&lt;br /&gt;
•	Perform complex queries to explore relationships, extract specific subsets of data, or validate inferred connections.&lt;br /&gt;
•	This feature integrates the power of semantic querying, enabling users to uncover insights that are not immediately apparent in the raw dataset.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In summary: upload and preprocess a CSV file, then explore the data and its underlying relations flexibly by visualising correlation analyses for selected feature pairs, building a knowledge graph of the significant relationships, and querying that graph directly through the SPARQL interface.&lt;br /&gt;
&lt;br /&gt;
[[File:Data analysis dash 3.png|center|x300px|Visualisation, querying and inference]]&lt;br /&gt;
&amp;lt;div align=&amp;quot;center&amp;quot; style=&amp;quot;font-size:88%;line-height: 2em&amp;quot;&amp;gt;&#039;&#039;Image 3: Visualisation, querying and inference&#039;&#039;&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;Note: This asset is under ongoing development.&#039;&#039;&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Licence ===&lt;br /&gt;
This project is licensed under the MIT License.&lt;br /&gt;
&lt;br /&gt;
== Resources ==&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;A live demo of this component can be found [https://airedgio-dashboard.streamlit.app/ here]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;The source code is available at the following [https://github.com/AI-REDGIO-5-0/data-dashboard/ GitHub repository]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Created by [https://www.scch.at/ Software Competence Center Hagenberg - SCCH]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Contact &#039;&#039;jorge.martinez-gil@scch.at&#039;&#039;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Acknowledgement ====&lt;br /&gt;
&amp;lt;p style=&amp;quot;font-size:90%;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;This tool has been developed mainly in the framework of the [https://trineflex.eu/ TrineFlex] project, funded by the European Union’s Horizon Europe research and innovation programme under Grant Agreement No 101058174.&#039;&#039;&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Relevant Categories ==&lt;br /&gt;
[[Category:Software-as-a-Service]][[Category:Collaborative Intelligence / Human-in-the-Loop]][[Category:Preventive Maintenance]][[Category:Process Optimisation]][[Category:Generic Purpose]][[Category:Cloud-based]]&lt;/div&gt;</summary>
		<author><name>Admino739mjm7</name></author>
	</entry>
	<entry>
		<id>https://wiki.ai-redgio50.s5labs.eu/index.php?title=Data_Analysis_Dashboard&amp;diff=528</id>
		<title>Data Analysis Dashboard</title>
		<link rel="alternate" type="text/html" href="https://wiki.ai-redgio50.s5labs.eu/index.php?title=Data_Analysis_Dashboard&amp;diff=528"/>
		<updated>2025-10-20T05:39:21Z</updated>

		<summary type="html">&lt;p&gt;Admino739mjm7: /* Usage */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Dashboard that allows human operators to monitor and extract knowledge from tabular data through visualization/interpretability, querying, and inference features.&amp;lt;/strong&amp;gt; [[File:Data analysis dash 1.jpg|thumb|right|&amp;lt;div style=&amp;quot;font-size:88%;line-height: 1.5em&amp;quot;&amp;gt;Image 1: The main view of the Data Analysis Dashboard&amp;lt;/div&amp;gt;]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Asset Description ==&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The Data Analysis Dashboard facilitates the comprehensive monitoring of tabular data (e.g., from sensor arrays) with sophisticated processing and analysis capabilities. Its key benefits include intuitive visualisation, flexible querying, and the ability to infer patterns and detect anomalies, giving human operators critical decision-making support. It uses some classical Data and Knowledge Engineering methods (e.g., Knowledge Graphs) and is implemented based on the [https://streamlit.io/ Streamlit] framework.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Key Features ==&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt;Data Upload and Preprocessing:&amp;lt;/strong&amp;gt;&lt;br /&gt;
*Upload datasets via file input or URL.&lt;br /&gt;
*Manage missing values using various imputation methods.&lt;br /&gt;
*Encode categorical variables and coerce numeric data.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt;Correlation Analysis:&amp;lt;/strong&amp;gt;&lt;br /&gt;
*Compute Pearson and Spearman correlations, as well as Euclidean similarity.&lt;br /&gt;
*User-defined thresholds for filtering significant relationships.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt;Knowledge Graph Creation:&amp;lt;/strong&amp;gt;&lt;br /&gt;
*Automatically generate RDF graphs representing significant correlations.&lt;br /&gt;
*Define relationships using user-specified thresholds.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt;Inference Rules:&amp;lt;/strong&amp;gt;&lt;br /&gt;
*Input custom IF-THEN rules to add inferred relationships to the RDF graph.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt;Visualization:&amp;lt;/strong&amp;gt;&lt;br /&gt;
*Visualize correlations using interactive Plotly subplots.&lt;br /&gt;
*Display the knowledge graph as a network with customizable aesthetics.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt;SPARQL Querying:&amp;lt;/strong&amp;gt;&lt;br /&gt;
*Query the RDF graph using SPARQL with a user-friendly interface.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Usage ==&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
1.	Upload a Dataset:&lt;br /&gt;
Begin by uploading a dataset in .csv format. This serves as the primary input for analysis. The system supports datasets of varying sizes, ensuring flexibility for small exploratory analyses or large-scale data studies.&lt;br /&gt;
&lt;br /&gt;
2.	Select Columns for Analysis:&lt;br /&gt;
Choose specific pairs or groups of columns from the dataset to focus on correlation analysis. This step allows users to isolate meaningful relationships and ensures the analysis remains targeted to the relevant data dimensions.&lt;br /&gt;
&lt;br /&gt;
3.	Set Thresholds for Similarity Metrics:&lt;br /&gt;
Fine-tune the thresholds for multiple similarity measures using an intuitive sidebar interface:&lt;br /&gt;
•	Pearson Correlation: Adjust the threshold to measure linear relationships between variables.&lt;br /&gt;
•	Spearman Correlation: Set a rank-based similarity threshold to capture monotonic relationships.&lt;br /&gt;
•	Euclidean Similarity: Define distance-based thresholds for evaluating proximity between data points.&lt;br /&gt;
This customization allows users to align the analysis with the needs of their domain or study.&lt;br /&gt;
&lt;br /&gt;
4.	Analyze Data and Visualize Results:&lt;br /&gt;
Gain deeper insights into your data with interactive visual tools. For example:&lt;br /&gt;
•	Access detailed visual representations of correlation metrics, including heatmaps and scatter plots. These visuals highlight patterns and relationships in the data, making it easier to interpret findings.&lt;br /&gt;
•	Explore an interactive knowledge graph that visually maps correlations and relationships between selected data columns, enabling intuitive navigation of complex connections.&lt;br /&gt;
&lt;br /&gt;
5.	Apply Inference Rules:&lt;br /&gt;
Enhance the knowledge graph by introducing domain-specific logic:&lt;br /&gt;
•	Enter custom rules in an IF-THEN format (e.g., “IF variable A &amp;gt; threshold, THEN infer relationship B”).&lt;br /&gt;
•	This step enables users to derive new relationships and enrich the dataset with inferred knowledge, tailoring the analysis to their specific research goals.&lt;br /&gt;
&lt;br /&gt;
6.	Execute SPARQL Queries on the RDF Graph:&lt;br /&gt;
Use the SPARQL query interface to dive deeper into the generated knowledge graph, executing advanced queries to reveal hidden relationships and insights within the data:&lt;br /&gt;
•	Perform complex queries to explore relationships, extract specific subsets of data, or validate inferred connections.&lt;br /&gt;
•	This feature integrates the power of semantic querying, enabling users to uncover insights that are not immediately apparent in the raw dataset.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In summary: upload and preprocess a CSV file, then explore the data and its underlying relations flexibly by visualising correlation analyses for selected feature pairs, building a knowledge graph of the significant relationships, and querying that graph directly through the SPARQL interface.&lt;br /&gt;
&lt;br /&gt;
[[File:Data analysis dash 3.png|center|x300px|Visualisation, querying and inference]]&lt;br /&gt;
&amp;lt;div align=&amp;quot;center&amp;quot; style=&amp;quot;font-size:88%;line-height: 2em&amp;quot;&amp;gt;&#039;&#039;Image 2: Visualisation, querying and inference&#039;&#039;&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;Note: This asset is under ongoing development.&#039;&#039;&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Licence ===&lt;br /&gt;
This project is licensed under the MIT License.&lt;br /&gt;
&lt;br /&gt;
== Resources ==&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;A live demo of this component can be found [https://airedgio-dashboard.streamlit.app/ here]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;The source code is available at the following [https://github.com/AI-REDGIO-5-0/data-dashboard/ GitHub repository]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Created by [https://www.scch.at/ Software Competence Center Hagenberg - SCCH]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Contact &#039;&#039;jorge.martinez-gil@scch.at&#039;&#039;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Acknowledgement ====&lt;br /&gt;
&amp;lt;p style=&amp;quot;font-size:90%;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;This tool has been developed mainly in the framework of the [https://trineflex.eu/ TrineFlex] project, funded by the European Union’s Horizon Europe research and innovation programme under Grant Agreement No 101058174.&#039;&#039;&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Relevant Categories ==&lt;br /&gt;
[[Category:Software-as-a-Service]][[Category:Collaborative Intelligence / Human-in-the-Loop]][[Category:Preventive Maintenance]][[Category:Process Optimisation]][[Category:Generic Purpose]][[Category:Cloud-based]]&lt;/div&gt;</summary>
		<author><name>Admino739mjm7</name></author>
	</entry>
	<entry>
		<id>https://wiki.ai-redgio50.s5labs.eu/index.php?title=File:1.data_preprocessing.png&amp;diff=527</id>
		<title>File:1.data preprocessing.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.ai-redgio50.s5labs.eu/index.php?title=File:1.data_preprocessing.png&amp;diff=527"/>
		<updated>2025-10-20T05:37:34Z</updated>

		<summary type="html">&lt;p&gt;Admino739mjm7: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Admino739mjm7</name></author>
	</entry>
	<entry>
		<id>https://wiki.ai-redgio50.s5labs.eu/index.php?title=Data_Analysis_Dashboard&amp;diff=526</id>
		<title>Data Analysis Dashboard</title>
		<link rel="alternate" type="text/html" href="https://wiki.ai-redgio50.s5labs.eu/index.php?title=Data_Analysis_Dashboard&amp;diff=526"/>
		<updated>2025-10-20T05:36:41Z</updated>

		<summary type="html">&lt;p&gt;Admino739mjm7: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Dashboard that allows human operators to monitor and extract knowledge from tabular data through visualization/interpretability, querying, and inference features.&amp;lt;/strong&amp;gt; [[File:Data analysis dash 1.jpg|thumb|right|&amp;lt;div style=&amp;quot;font-size:88%;line-height: 1.5em&amp;quot;&amp;gt;Image 1: The main view of the Data Analysis Dashboard&amp;lt;/div&amp;gt;]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Asset Description ==&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The Data Analysis Dashboard facilitates the comprehensive monitoring of tabular data (e.g., from sensor arrays) with sophisticated processing and analysis capabilities. Its key benefits include intuitive visualisation, flexible querying, and the ability to infer patterns and detect anomalies, giving human operators critical decision-making support. It uses some classical Data and Knowledge Engineering methods (e.g., Knowledge Graphs) and is implemented based on the [https://streamlit.io/ Streamlit] framework.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Key Features ==&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt;Data Upload and Preprocessing:&amp;lt;/strong&amp;gt;&lt;br /&gt;
*Upload datasets via file input or URL.&lt;br /&gt;
*Manage missing values using various imputation methods.&lt;br /&gt;
*Encode categorical variables and coerce numeric data.&lt;br /&gt;
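The preprocessing steps above can be sketched with pandas (a minimal illustration; the column names and the choice of mean imputation are assumptions, not the dashboard's fixed behaviour):&lt;br /&gt;

```python
import pandas as pd

# Hypothetical sensor table with a missing value and a categorical column.
df = pd.DataFrame({
    "temp": [20.1, None, 21.3, 22.0],
    "status": ["ok", "ok", "warn", "ok"],
})

# Manage missing values (mean imputation; other methods are possible).
df["temp"] = df["temp"].fillna(df["temp"].mean())

# Encode the categorical variable and coerce all columns to numeric.
df["status"] = df["status"].astype("category").cat.codes
df = df.apply(pd.to_numeric, errors="coerce")
print(df)
```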
&lt;br /&gt;
&amp;lt;strong&amp;gt;Correlation Analysis:&amp;lt;/strong&amp;gt;&lt;br /&gt;
*Compute Pearson and Spearman correlations, as well as Euclidean similarity.&lt;br /&gt;
*User-defined thresholds for filtering significant relationships.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt;Knowledge Graph Creation:&amp;lt;/strong&amp;gt;&lt;br /&gt;
*Automatically generate RDF graphs representing significant correlations.&lt;br /&gt;
*Define relationships using user-specified thresholds.&lt;br /&gt;
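As an illustration, the correlation and threshold-filtering logic might look as follows (the toy data and the 0.9 threshold are made-up examples):&lt;br /&gt;

```python
import pandas as pd

# Toy feature table (hypothetical values).
df = pd.DataFrame({
    "a": [1.0, 2.0, 3.0, 4.0, 5.0],
    "b": [2.1, 3.9, 6.2, 8.0, 9.8],
    "c": [5.0, 1.0, 4.0, 2.0, 3.0],
})

# Pearson and Spearman correlation matrices.
pearson = df.corr(method="pearson")
spearman = df.corr(method="spearman")

# Keep only relationships above a user-defined threshold;
# these pairs would become edges of the knowledge graph.
THRESHOLD = 0.9
edges = [
    (x, y, pearson.loc[x, y])
    for x in df.columns for y in df.columns
    if x != y and abs(pearson.loc[x, y]) >= THRESHOLD
]
print(edges)
```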
&lt;br /&gt;
&amp;lt;strong&amp;gt;Inference Rules:&amp;lt;/strong&amp;gt;&lt;br /&gt;
*Input custom IF-THEN rules to add inferred relationships to the RDF graph.&lt;br /&gt;
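A minimal sketch of the IF-THEN rule idea over a set of relationship triples (the dashboard's actual rule syntax may differ; the predicate name is hypothetical):&lt;br /&gt;

```python
# Start from one relationship discovered by correlation analysis.
triples = {("temp", "correlatesWith", "pressure")}

def apply_rule(triples, premise, conclusion):
    """IF a triple matching the premise exists, THEN add the conclusion."""
    if premise in triples:
        triples.add(conclusion)
    return triples

# Example symmetry rule: correlation holds in both directions.
apply_rule(
    triples,
    premise=("temp", "correlatesWith", "pressure"),
    conclusion=("pressure", "correlatesWith", "temp"),
)
print(sorted(triples))
```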
&lt;br /&gt;
&amp;lt;strong&amp;gt;Visualization:&amp;lt;/strong&amp;gt;&lt;br /&gt;
*Visualize correlations using interactive Plotly subplots.&lt;br /&gt;
*Display the knowledge graph as a network with customizable aesthetics.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt;SPARQL Querying:&amp;lt;/strong&amp;gt;&lt;br /&gt;
*Query the RDF graph using SPARQL with a user-friendly interface.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Usage ==&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
This software helps users analyse datasets and uncover hidden relationships.&lt;br /&gt;
Initially, the user should perform the following:&lt;br /&gt;
&lt;br /&gt;
* Upload a CSV file containing the data to be analysed&lt;br /&gt;
* Configure the data preprocessing aspects (handling missing values and data types)&lt;br /&gt;
&lt;br /&gt;
Afterwards, the user can explore their data and the underlying relations, extracting knowledge in a flexible way:&lt;br /&gt;
* Select pairs of features and see the visualisations of correlation analysis&lt;br /&gt;
&lt;br /&gt;
[[File:Data analysis dash 2.png|center|x300px|Image Caption]]&lt;br /&gt;
&amp;lt;div align=&amp;quot;center&amp;quot; style=&amp;quot;font-size:88%;line-height: 2em&amp;quot;&amp;gt;&#039;&#039;Image 2: Statistical analysis&#039;&#039;&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Build a knowledge graph that represents significant relationships in the data &lt;br /&gt;
* Interact directly with the knowledge graph through a SPARQL query interface &lt;br /&gt;
&lt;br /&gt;
[[File:Data analysis dash 3.png|center|x300px|Image Caption]]&lt;br /&gt;
&amp;lt;div align=&amp;quot;center&amp;quot; style=&amp;quot;font-size:88%;line-height: 2em&amp;quot;&amp;gt;&#039;&#039;Image 3: Visualisation, querying and inference&#039;&#039;&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;Note: This asset is under ongoing development.&#039;&#039;&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Licence===&lt;br /&gt;
This project is licensed under the MIT License.&lt;br /&gt;
&lt;br /&gt;
== Resources ==&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;A live demo of this component can be found [https://airedgio-dashboard.streamlit.app/ here]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;The source code is available at the following [https://github.com/AI-REDGIO-5-0/data-dashboard/ GitHub repository]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Created by [https://www.scch.at/ Software Competence Center Hagenberg - SCCH]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Contact &#039;&#039;jorge.martinez-gil@scch.at&#039;&#039;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Acknowledgement====&lt;br /&gt;
&amp;lt;p style=&amp;quot;font-size:90%;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;This tool has been developed mainly within the framework of the [https://trineflex.eu/ TrineFlex] project, funded by the European Union’s Horizon Europe research and innovation programme under Grant Agreement No 101058174.&#039;&#039;&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Relevant Categories==&lt;br /&gt;
[[Category:Software-as-a-Service]][[Category:Collaborative Intelligence / Human-in-the-Loop]][[Category:Preventive Maintenance]][[Category:Process Optimisation]][[Category:Generic Purpose]][[Category:Cloud-based]]&lt;/div&gt;</summary>
		<author><name>Admino739mjm7</name></author>
	</entry>
	<entry>
		<id>https://wiki.ai-redgio50.s5labs.eu/index.php?title=Data_Analysis_Dashboard&amp;diff=525</id>
		<title>Data Analysis Dashboard</title>
		<link rel="alternate" type="text/html" href="https://wiki.ai-redgio50.s5labs.eu/index.php?title=Data_Analysis_Dashboard&amp;diff=525"/>
		<updated>2025-10-20T05:35:33Z</updated>

		<summary type="html">&lt;p&gt;Admino739mjm7: /* Key Features */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Dashboard that allows human operators to monitor and extract knowledge from tabular data through visualization/interpretability, querying, and inference features.&amp;lt;/strong&amp;gt; [[File:Data analysis dash 1.jpg|thumb|right|&amp;lt;div style=&amp;quot;font-size:88%;line-height: 1.5em&amp;quot;&amp;gt;Image 1: The main view of the Data Analysis Dashboard&amp;lt;/div&amp;gt;]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Asset Description ==&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The Data Analysis Dashboard facilitates the comprehensive monitoring of tabular data (e.g., from sensor arrays) with sophisticated processing and analysis capabilities. Its key benefits include intuitive visualisation, flexible querying, and the ability to infer patterns and detect anomalies, giving human operators critical decision-making support. It uses some classical Data and Knowledge Engineering methods (e.g., Knowledge Graphs) and is implemented based on the [https://streamlit.io/ Streamlit] framework.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Key Features ==&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;Data Upload and Preprocessing:&#039;&lt;br /&gt;
*Upload datasets via file input or URL.&lt;br /&gt;
*Manage missing values using various imputation methods.&lt;br /&gt;
*Encode categorical variables and coerce numeric data.&lt;br /&gt;
&lt;br /&gt;
&#039;Correlation Analysis:&#039;&lt;br /&gt;
*Compute Pearson and Spearman correlations, as well as Euclidean similarity.&lt;br /&gt;
*User-defined thresholds for filtering significant relationships.&lt;br /&gt;
&lt;br /&gt;
&#039;Knowledge Graph Creation:&#039;&lt;br /&gt;
*Automatically generate RDF graphs representing significant correlations.&lt;br /&gt;
*Define relationships using user-specified thresholds.&lt;br /&gt;
&lt;br /&gt;
&#039;Inference Rules:&#039;&lt;br /&gt;
*Input custom IF-THEN rules to add inferred relationships to the RDF graph.&lt;br /&gt;
&lt;br /&gt;
&#039;Visualization:&#039;&lt;br /&gt;
*Visualize correlations using interactive Plotly subplots.&lt;br /&gt;
*Display the knowledge graph as a network with customizable aesthetics.&lt;br /&gt;
&lt;br /&gt;
&#039;SPARQL Querying:&#039;&lt;br /&gt;
*Query the RDF graph using SPARQL with a user-friendly interface.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Usage ==&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
This software helps users analyse datasets and uncover hidden relationships.&lt;br /&gt;
Initially, the user should perform the following:&lt;br /&gt;
&lt;br /&gt;
* Upload a CSV file containing the data to be analysed&lt;br /&gt;
* Configure the data preprocessing aspects (handling missing values and data types)&lt;br /&gt;
&lt;br /&gt;
Afterwards, the user can explore their data and the underlying relations, extracting knowledge in a flexible way:&lt;br /&gt;
* Select pairs of features and see the visualisations of correlation analysis&lt;br /&gt;
&lt;br /&gt;
[[File:Data analysis dash 2.png|center|x300px|Image Caption]]&lt;br /&gt;
&amp;lt;div align=&amp;quot;center&amp;quot; style=&amp;quot;font-size:88%;line-height: 2em&amp;quot;&amp;gt;&#039;&#039;Image 2: Statistical analysis&#039;&#039;&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Build a knowledge graph that represents significant relationships in the data &lt;br /&gt;
* Interact directly with the knowledge graph through a SPARQL query interface &lt;br /&gt;
&lt;br /&gt;
[[File:Data analysis dash 3.png|center|x300px|Image Caption]]&lt;br /&gt;
&amp;lt;div align=&amp;quot;center&amp;quot; style=&amp;quot;font-size:88%;line-height: 2em&amp;quot;&amp;gt;&#039;&#039;Image 3: Visualisation, querying and inference&#039;&#039;&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;Note: This asset is under ongoing development.&#039;&#039;&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Licence===&lt;br /&gt;
This project is licensed under the MIT License.&lt;br /&gt;
&lt;br /&gt;
== Resources ==&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;A live demo of this component can be found [https://airedgio-dashboard.streamlit.app/ here]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;The source code is available at the following [https://github.com/AI-REDGIO-5-0/data-dashboard/ GitHub repository]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Created by [https://www.scch.at/ Software Competence Center Hagenberg - SCCH]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Contact &#039;&#039;jorge.martinez-gil@scch.at&#039;&#039;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Acknowledgement====&lt;br /&gt;
&amp;lt;p style=&amp;quot;font-size:90%;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;This tool has been developed mainly within the framework of the [https://trineflex.eu/ TrineFlex] project, funded by the European Union’s Horizon Europe research and innovation programme under Grant Agreement No 101058174.&#039;&#039;&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Relevant Categories==&lt;br /&gt;
[[Category:Software-as-a-Service]][[Category:Collaborative Intelligence / Human-in-the-Loop]][[Category:Preventive Maintenance]][[Category:Process Optimisation]][[Category:Generic Purpose]][[Category:Cloud-based]]&lt;/div&gt;</summary>
		<author><name>Admino739mjm7</name></author>
	</entry>
	<entry>
		<id>https://wiki.ai-redgio50.s5labs.eu/index.php?title=Data_Analysis_Dashboard&amp;diff=524</id>
		<title>Data Analysis Dashboard</title>
		<link rel="alternate" type="text/html" href="https://wiki.ai-redgio50.s5labs.eu/index.php?title=Data_Analysis_Dashboard&amp;diff=524"/>
		<updated>2025-10-20T05:33:48Z</updated>

		<summary type="html">&lt;p&gt;Admino739mjm7: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Dashboard that allows human operators to monitor and extract knowledge from tabular data through visualization/interpretability, querying, and inference features.&amp;lt;/strong&amp;gt; [[File:Data analysis dash 1.jpg|thumb|right|&amp;lt;div style=&amp;quot;font-size:88%;line-height: 1.5em&amp;quot;&amp;gt;Image 1: The main view of the Data Analysis Dashboard&amp;lt;/div&amp;gt;]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Asset Description ==&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The Data Analysis Dashboard facilitates the comprehensive monitoring of tabular data (e.g., from sensor arrays) with sophisticated processing and analysis capabilities. Its key benefits include intuitive visualisation, flexible querying, and the ability to infer patterns and detect anomalies, giving human operators critical decision-making support. It uses some classical Data and Knowledge Engineering methods (e.g., Knowledge Graphs) and is implemented based on the [https://streamlit.io/ Streamlit] framework.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Key Features ==&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Data Upload and Preprocessing:&#039;&#039;&lt;br /&gt;
*Upload datasets via file input or URL.&lt;br /&gt;
*Manage missing values using various imputation methods.&lt;br /&gt;
*Encode categorical variables and coerce numeric data.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Correlation Analysis:&#039;&#039;&lt;br /&gt;
*Compute Pearson and Spearman correlations, as well as Euclidean similarity.&lt;br /&gt;
*User-defined thresholds for filtering significant relationships.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Knowledge Graph Creation:&#039;&#039;&lt;br /&gt;
*Automatically generate RDF graphs representing significant correlations.&lt;br /&gt;
*Define relationships using user-specified thresholds.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Inference Rules:&#039;&#039;&lt;br /&gt;
*Input custom IF-THEN rules to add inferred relationships to the RDF graph.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Visualization:&#039;&#039;&lt;br /&gt;
*Visualize correlations using interactive Plotly subplots.&lt;br /&gt;
*Display the knowledge graph as a network with customizable aesthetics.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;SPARQL Querying:&#039;&#039;&lt;br /&gt;
*Query the RDF graph using SPARQL with a user-friendly interface.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Usage ==&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
This software helps users analyse datasets and uncover hidden relationships.&lt;br /&gt;
Initially, the user should perform the following:&lt;br /&gt;
&lt;br /&gt;
* Upload a CSV file containing the data to be analysed&lt;br /&gt;
* Configure the data preprocessing aspects (handling missing values and data types)&lt;br /&gt;
&lt;br /&gt;
Afterwards, the user can explore their data and the underlying relations, extracting knowledge in a flexible way:&lt;br /&gt;
* Select pairs of features and see the visualisations of correlation analysis&lt;br /&gt;
&lt;br /&gt;
[[File:Data analysis dash 2.png|center|x300px|Image Caption]]&lt;br /&gt;
&amp;lt;div align=&amp;quot;center&amp;quot; style=&amp;quot;font-size:88%;line-height: 2em&amp;quot;&amp;gt;&#039;&#039;Image 2: Statistical analysis&#039;&#039;&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Build a knowledge graph that represents significant relationships in the data &lt;br /&gt;
* Interact directly with the knowledge graph through a SPARQL query interface &lt;br /&gt;
&lt;br /&gt;
[[File:Data analysis dash 3.png|center|x300px|Image Caption]]&lt;br /&gt;
&amp;lt;div align=&amp;quot;center&amp;quot; style=&amp;quot;font-size:88%;line-height: 2em&amp;quot;&amp;gt;&#039;&#039;Image 3: Visualisation, querying and inference&#039;&#039;&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;Note: This asset is under ongoing development.&#039;&#039;&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Licence===&lt;br /&gt;
This project is licensed under the MIT License.&lt;br /&gt;
&lt;br /&gt;
== Resources ==&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;A live demo of this component can be found [https://airedgio-dashboard.streamlit.app/ here]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;The source code is available at the following [https://github.com/AI-REDGIO-5-0/data-dashboard/ GitHub repository]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Created by [https://www.scch.at/ Software Competence Center Hagenberg - SCCH]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Contact &#039;&#039;jorge.martinez-gil@scch.at&#039;&#039;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Acknowledgement====&lt;br /&gt;
&amp;lt;p style=&amp;quot;font-size:90%;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;This tool has been developed mainly within the framework of the [https://trineflex.eu/ TrineFlex] project, funded by the European Union’s Horizon Europe research and innovation programme under Grant Agreement No 101058174.&#039;&#039;&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Relevant Categories==&lt;br /&gt;
[[Category:Software-as-a-Service]][[Category:Collaborative Intelligence / Human-in-the-Loop]][[Category:Preventive Maintenance]][[Category:Process Optimisation]][[Category:Generic Purpose]][[Category:Cloud-based]]&lt;/div&gt;</summary>
		<author><name>Admino739mjm7</name></author>
	</entry>
	<entry>
		<id>https://wiki.ai-redgio50.s5labs.eu/index.php?title=Data_Analysis_Dashboard&amp;diff=523</id>
		<title>Data Analysis Dashboard</title>
		<link rel="alternate" type="text/html" href="https://wiki.ai-redgio50.s5labs.eu/index.php?title=Data_Analysis_Dashboard&amp;diff=523"/>
		<updated>2025-10-07T15:00:14Z</updated>

		<summary type="html">&lt;p&gt;Admino739mjm7: /* Licence */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Dashboard that allows human operators to monitor and extract knowledge from tabular data through visualization/interpretability, querying, and inference features.&amp;lt;/strong&amp;gt; [[File:Data analysis dash 1.jpg|thumb|right|&amp;lt;div style=&amp;quot;font-size:88%;line-height: 1.5em&amp;quot;&amp;gt;Image 1: The main view of the Data Analysis Dashboard&amp;lt;/div&amp;gt;]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Asset Description ==&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The Data Analysis Dashboard facilitates the comprehensive monitoring of tabular data (e.g., from sensor arrays) with sophisticated processing and analysis capabilities. Its key benefits include intuitive visualisation, flexible querying, and the ability to infer patterns and detect anomalies, giving human operators critical decision-making support. It uses some classical Data and Knowledge Engineering methods (e.g., Knowledge Graphs) and is implemented based on the [https://streamlit.io/ Streamlit] framework.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Usage ==&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
This software helps users analyse datasets and uncover hidden relationships.&lt;br /&gt;
Initially, the user should perform the following:&lt;br /&gt;
&lt;br /&gt;
* Upload a CSV file containing the data to be analysed&lt;br /&gt;
* Configure the data preprocessing aspects (handling missing values and data types)&lt;br /&gt;
&lt;br /&gt;
Afterwards, the user can explore their data and the underlying relations, extracting knowledge in a flexible way:&lt;br /&gt;
* Select pairs of features and see the visualisations of correlation analysis&lt;br /&gt;
&lt;br /&gt;
[[File:Data analysis dash 2.png|center|x300px|Image Caption]]&lt;br /&gt;
&amp;lt;div align=&amp;quot;center&amp;quot; style=&amp;quot;font-size:88%;line-height: 2em&amp;quot;&amp;gt;&#039;&#039;Image 2: Statistical analysis&#039;&#039;&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Build a knowledge graph that represents significant relationships in the data &lt;br /&gt;
* Interact directly with the knowledge graph through a SPARQL query interface &lt;br /&gt;
&lt;br /&gt;
[[File:Data analysis dash 3.png|center|x300px|Image Caption]]&lt;br /&gt;
&amp;lt;div align=&amp;quot;center&amp;quot; style=&amp;quot;font-size:88%;line-height: 2em&amp;quot;&amp;gt;&#039;&#039;Image 3: Visualisation, querying and inference&#039;&#039;&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;Note: This asset is under ongoing development.&#039;&#039;&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Licence===&lt;br /&gt;
This project is licensed under the MIT License.&lt;br /&gt;
&lt;br /&gt;
== Resources ==&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;A live demo of this component can be found [https://airedgio-dashboard.streamlit.app/ here]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;The source code is available at the following [https://github.com/AI-REDGIO-5-0/data-dashboard/ GitHub repository]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Created by [https://www.scch.at/ Software Competence Center Hagenberg - SCCH]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Contact &#039;&#039;jorge.martinez-gil@scch.at&#039;&#039;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Acknowledgement====&lt;br /&gt;
&amp;lt;p style=&amp;quot;font-size:90%;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;This tool has been developed mainly within the framework of the [https://trineflex.eu/ TrineFlex] project, funded by the European Union’s Horizon Europe research and innovation programme under Grant Agreement No 101058174.&#039;&#039;&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Relevant Categories==&lt;br /&gt;
[[Category:Software-as-a-Service]][[Category:Collaborative Intelligence / Human-in-the-Loop]][[Category:Preventive Maintenance]][[Category:Process Optimisation]][[Category:Generic Purpose]][[Category:Cloud-based]]&lt;/div&gt;</summary>
		<author><name>Admino739mjm7</name></author>
	</entry>
	<entry>
		<id>https://wiki.ai-redgio50.s5labs.eu/index.php?title=Data_Analysis_Dashboard&amp;diff=522</id>
		<title>Data Analysis Dashboard</title>
		<link rel="alternate" type="text/html" href="https://wiki.ai-redgio50.s5labs.eu/index.php?title=Data_Analysis_Dashboard&amp;diff=522"/>
		<updated>2025-10-07T14:58:59Z</updated>

		<summary type="html">&lt;p&gt;Admino739mjm7: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Dashboard that allows human operators to monitor and extract knowledge from tabular data through visualization/interpretability, querying, and inference features.&amp;lt;/strong&amp;gt; [[File:Data analysis dash 1.jpg|thumb|right|&amp;lt;div style=&amp;quot;font-size:88%;line-height: 1.5em&amp;quot;&amp;gt;Image 1: The main view of the Data Analysis Dashboard&amp;lt;/div&amp;gt;]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Asset Description ==&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The Data Analysis Dashboard facilitates the comprehensive monitoring of tabular data (e.g., from sensor arrays) with sophisticated processing and analysis capabilities. Its key benefits include intuitive visualisation, flexible querying, and the ability to infer patterns and detect anomalies, giving human operators critical decision-making support. It uses some classical Data and Knowledge Engineering methods (e.g., Knowledge Graphs) and is implemented based on the [https://streamlit.io/ Streamlit] framework.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Usage ==&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
This software helps users analyse datasets and uncover hidden relationships.&lt;br /&gt;
Initially, the user should perform the following:&lt;br /&gt;
&lt;br /&gt;
* Upload a CSV file containing the data to be analysed&lt;br /&gt;
* Configure the data preprocessing aspects (handling missing values and data types)&lt;br /&gt;
&lt;br /&gt;
Afterwards, the user can explore their data and the underlying relations, extracting knowledge in a flexible way:&lt;br /&gt;
* Select pairs of features and see the visualisations of correlation analysis&lt;br /&gt;
&lt;br /&gt;
[[File:Data analysis dash 2.png|center|x300px|Image Caption]]&lt;br /&gt;
&amp;lt;div align=&amp;quot;center&amp;quot; style=&amp;quot;font-size:88%;line-height: 2em&amp;quot;&amp;gt;&#039;&#039;Image 2: Statistical analysis&#039;&#039;&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Build a knowledge graph that represents significant relationships in the data &lt;br /&gt;
* Interact directly with the knowledge graph through a SPARQL query interface &lt;br /&gt;
&lt;br /&gt;
[[File:Data analysis dash 3.png|center|x300px|Image Caption]]&lt;br /&gt;
&amp;lt;div align=&amp;quot;center&amp;quot; style=&amp;quot;font-size:88%;line-height: 2em&amp;quot;&amp;gt;&#039;&#039;Image 3: Visualisation, querying and inference&#039;&#039;&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;Note: This asset is under ongoing development.&#039;&#039;&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Licence===&lt;br /&gt;
Restricted&lt;br /&gt;
&lt;br /&gt;
== Resources ==&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;A live demo of this component can be found [https://airedgio-dashboard.streamlit.app/ here]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;The source code is available at the following [https://github.com/AI-REDGIO-5-0/data-dashboard/ GitHub repository]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Created by [https://www.scch.at/ Software Competence Center Hagenberg - SCCH]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Contact &#039;&#039;jorge.martinez-gil@scch.at&#039;&#039;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Acknowledgement====&lt;br /&gt;
&amp;lt;p style=&amp;quot;font-size:90%;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;This tool has been developed mainly within the framework of the [https://trineflex.eu/ TrineFlex] project, funded by the European Union’s Horizon Europe research and innovation programme under Grant Agreement No 101058174.&#039;&#039;&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Relevant Categories==&lt;br /&gt;
[[Category:Software-as-a-Service]][[Category:Collaborative Intelligence / Human-in-the-Loop]][[Category:Preventive Maintenance]][[Category:Process Optimisation]][[Category:Generic Purpose]][[Category:Cloud-based]]&lt;/div&gt;</summary>
		<author><name>Admino739mjm7</name></author>
	</entry>
	<entry>
		<id>https://wiki.ai-redgio50.s5labs.eu/index.php?title=AI_REDGIO_5.0_Collaborative_Intelligence_Platform&amp;diff=521</id>
		<title>AI REDGIO 5.0 Collaborative Intelligence Platform</title>
		<link rel="alternate" type="text/html" href="https://wiki.ai-redgio50.s5labs.eu/index.php?title=AI_REDGIO_5.0_Collaborative_Intelligence_Platform&amp;diff=521"/>
		<updated>2025-10-07T14:55:29Z</updated>

		<summary type="html">&lt;p&gt;Admino739mjm7: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; &lt;br /&gt;
&amp;lt;strong&amp;gt;Facilitating Human-AI collaboration through cutting-edge AI capabilities&amp;lt;/strong&amp;gt; [[File:Step 1.png|thumb|right|Image 1: The AI REDGIO 5.0 Collaborative Intelligence Platform]]&lt;br /&gt;
&lt;br /&gt;
== Asset Description ==&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
The AI REDGIO 5.0 Collaborative Intelligence Platform is a solution at the forefront of Industry 5.0. It combines various technological advancements to redefine industrial landscapes, facilitating Human-AI collaboration by integrating cutting-edge AI capabilities. In this way, the platform illustrates the potential of connected devices, sensors, and machines through real-time data fusion and analysis, driving optimal decision-making and resource allocation.&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Features ==&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto; width:100%; line-height: 1.5em&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Feature !! Description&lt;br /&gt;
|-&lt;br /&gt;
| Integration of Advanced Technologies|| Integration of cutting-edge technologies, including AI-driven analytics, and Internet of Things (IoT)-enabled devices to create a synergistic ecosystem&lt;br /&gt;
|-&lt;br /&gt;
| Human-Machine Interaction|| Facilitating interaction between human operators and machines&lt;br /&gt;
|-&lt;br /&gt;
| Support of Real-time Operations|| The platform works in tandem with real-time data analytics on IoT data&lt;br /&gt;
|-&lt;br /&gt;
| Continuous Learning and Knowledge Management|| Storage in a knowledge-sharing and learning management repository&lt;br /&gt;
|-&lt;br /&gt;
| Collaborative Innovation || Fostering a culture of collaborative innovation, where human operators can interact with AI for improved processes and final product&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== User Journey ==&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
Our platform adopts a practical approach to problem-solving, data analysis, and process optimisation. This subsection offers a comprehensive usage walkthrough, explaining how the user interacts with the Collaborative Intelligence Platform both as a whole and as part of the MLOps operations and tools they already use.&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
[[File:Collaborative Intelligence Platfrom for Industry 5.0 User Journey.png|center|x300px|Image Caption]]&lt;br /&gt;
&amp;lt;div align=&amp;quot;center&amp;quot; style=&amp;quot;font-size:88%;line-height: 2em&amp;quot;&amp;gt;&#039;&#039;Image 2: Collaborative Intelligence Platform for Industry 5.0 - User Journey&#039;&#039;&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
# &#039;&#039;&#039; Onboarding and User Authentication&#039;&#039;&#039;: Ensure secure access to the system, safeguarding sensitive data and mitigating potential security breaches&lt;br /&gt;
# &#039;&#039;&#039;Integrating IoT and Smart Devices&#039;&#039;&#039;: The integration of IoT and smart devices using a methodology for the interconnection of heterogeneous devices&lt;br /&gt;
# &#039;&#039;&#039;AI-driven Data Analytics and Insights&#039;&#039;&#039;: The system can start working to detect patterns, anomalies, and trends within the data through ML algorithms&lt;br /&gt;
# &#039;&#039;&#039;Human-Machine Interaction and Augmentation&#039;&#039;&#039;: Coordination of actions between human operators and machines, emphasising augmentation rather than substitution&lt;br /&gt;
# &#039;&#039;&#039;Real-Time Process Optimisation&#039;&#039;&#039;: Automatically adjusting operational parameters in response to changing conditions and input provided by the user through the [[#Collaborative Intelligence Component]]&lt;br /&gt;
# &#039;&#039;&#039;Knowledge Base and Learning Management&#039;&#039;&#039;: Exploring the knowledge repository containing organisational know-how, accumulated insights, and best practices&lt;br /&gt;
# &#039;&#039;&#039;Collaborative Innovation and Continuous Improvement&#039;&#039;&#039;: The ultimate goal of Collaborative Intelligence: facilitating cross-functional collaboration and iterative enhancement through data analytics, human-machine collaboration, and organisational knowledge&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Collaborative Intelligence Platform Components==&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
The AI REDGIO 5.0 Collaborative Intelligence Platform comprises three underlying components.&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto; width:100%; line-height: 1.5em&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Component !! Description !! Type !! Status&lt;br /&gt;
|-&lt;br /&gt;
| Collaborative Intelligence Component ||Allows the human operator to accept or reject a manufactured product. This component has an interface that can be configured around various metrics, including, but not limited to, accuracy, interpretability, and speed ||User-facing ||Working Prototype&lt;br /&gt;
|-&lt;br /&gt;
| Pipeline Creation Component ||Easily create, configure, and deploy workflows in various work environments. This component acts as the backbone of a robust and efficient workflow management system, enabling organisations to optimise their processes and achieve higher productivity ||User-facing ||Work in Progress&lt;br /&gt;
|-&lt;br /&gt;
| Interfacing Component ||Connects the operator interface with hardware and housing pre-trained models ready for deployment in manufacturing environments. This component acts as the bridge between the digital and physical realms of experiments ||Backend ||Working Prototype&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Usage Walkthroughs===&lt;br /&gt;
How to use the AI REDGIO 5.0 Collaborative Intelligence Platform&lt;br /&gt;
==== Collaborative Intelligence Component====&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
The human operator can assess the results of the AI/ML models and provide feedback with regard to three aspects (accuracy, energy-efficiency, latency) through the Collaborative Intelligence (C.I) component. The operator&#039;s feedback is propagated back to the user&#039;s AI/ML analytics tool, which makes the relevant adjustments to the model configuration to optimise results.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt;1. Entering the C.I Platform&amp;lt;/strong&amp;gt;&lt;br /&gt;
* Navigate to the URL https://github.com/AI-REDGIO-5-0/ci-component&lt;br /&gt;
* Enter your credentials &#039;&#039;(Note: User registration and account creation are managed by [https://www.scch.at/ Software Competence Center Hagenberg - SCCH])&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt;2. Data and ML/AI results inspection &amp;lt;/strong&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After successfully entering the platform, you are led to the C.I Dashboard, where you can review your data and provide feedback.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Upload Data&#039;&#039;&#039;: First you need to provide the data that you want to inspect. &#039;&#039;(Note: Data are manually onboarded. In upcoming releases, the onboarding will happen through integration with the user&#039;s AI/ML tools and system.)&#039;&#039;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:00 - Upload Data.png|center|x300px|Image Caption]]&lt;br /&gt;
&amp;lt;div align=&amp;quot;center&amp;quot; style=&amp;quot;font-size:88%;line-height: 2em&amp;quot;&amp;gt;&#039;&#039;Image 3: Upload data&#039;&#039;&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;  &lt;br /&gt;
* &#039;&#039;&#039;Select ML/AI evaluation axis&#039;&#039;&#039;: ML/AI models can be evaluated based on various parameters. Click one of the available radio buttons (accuracy, energy-efficiency, latency) to instantiate the Table and the charts with the corresponding data and provide your feedback&lt;br /&gt;
* &#039;&#039;&#039;View the data&#039;&#039;&#039;: The Table is loaded with the data from the integrated dataset (containing the ML/AI results). You can select in a dropdown the ML/AI output feature that you would like to inspect.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:01 - Select feature.png|center|x300px|Image Caption]]&lt;br /&gt;
&amp;lt;div align=&amp;quot;center&amp;quot; style=&amp;quot;font-size:88%;line-height: 2em&amp;quot;&amp;gt;&#039;&#039;Image 4: Selection of feature to be inspected&#039;&#039;&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt;3. Human provides feedback to AI&amp;lt;/strong&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Provide feedback&#039;&#039;&#039;: Check the value of the output feature per row and provide feedback on the ML/AI performance with regard to the evaluation axis you have selected. Feedback takes the form of Yes (i.e., the ML/AI model was accurate/energy-efficient/fast for the specific output feature value) or No (i.e., it was not accurate/not energy-efficient/slow)&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:02 - Provision of feedback.png|center|x300px|Image Caption]]&lt;br /&gt;
&amp;lt;div align=&amp;quot;center&amp;quot; style=&amp;quot;font-size:88%;line-height: 2em&amp;quot;&amp;gt;&#039;&#039;Image 5: Provision of feedback on the ML/AI model from the aspect of correctness&#039;&#039;&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Check the evaluation charts&#039;&#039;&#039;: The bar charts that appear on the top of the C.I Dashboard provide an overview of the ML/AI evaluation by the human operator so far&lt;br /&gt;
* &#039;&#039;&#039;Export findings&#039;&#039;&#039;: Click the option &#039;Print Non-OK Rows to a File&#039; to export the rows marked as non-OK for further use&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:03 - export non ok rows.png|center|x300px|Image Caption]]&lt;br /&gt;
&amp;lt;div align=&amp;quot;center&amp;quot; style=&amp;quot;font-size:88%;line-height: 2em&amp;quot;&amp;gt;&#039;&#039;Image 6: Export non-ok rows&#039;&#039;&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt;4. AI is adjusted according to Human Feedback&amp;lt;/strong&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;font-size:75%&amp;quot;&amp;gt;&#039;&#039;Note: Will be part of upcoming releases&#039;&#039;&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Interplay with Input Analysis Tool===&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
To further assist the data scientists and other users in making sense of their data, thus enhancing human-AI collaboration, the AI REDGIO 5.0 Collaborative Intelligence Platform works in tandem with the Input Analysis tool.&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
==== How it works====&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
Through the [https://wiki.ai-redgio50.s5labs.eu/index.php?title=Data_Analysis_Dashboard Data Analysis Dashboard], the human operator can explore their data and perform input analysis based on user-selected aspects and thresholds to find correlations between features.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt;1. Upload input data to C.I Dashboard and navigate to the Data Analysis Dashboard &amp;lt;/strong&amp;gt;&lt;br /&gt;
* Go to the main Dashboard page of the C.I Dashboard&lt;br /&gt;
* Click &#039;Choose file&#039; and select the input data file from the directory&lt;br /&gt;
* Click &#039;Upload csv&#039; to onboard your input data&lt;br /&gt;
* Click the &#039;Input Analysis&#039; hyperlink to be led to the relevant tool to inspect your data&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Step 1.png|center|x300px|Image Caption]]&lt;br /&gt;
&amp;lt;div align=&amp;quot;center&amp;quot; style=&amp;quot;font-size:88%;line-height: 2em&amp;quot;&amp;gt;&#039;&#039;Image 7: Link to Data Analysis Dashboard from C.I Dashboard&#039;&#039;&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt;2. Perform input analysis&amp;lt;/strong&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After clicking the hyperlink, you are led to the Data Analysis Dashboard. Through its integration with the C.I Platform, the onboarded input data are already available there for exploration.&lt;br /&gt;
&lt;br /&gt;
* Select Data Analysis aspects: Through the Data Analysis Dashboard you can select the analysis aspects, including the pairs of columns to compare, the missing-value imputation method, and the numeric conversion method. Additionally, you can set the thresholds for the analysis (Pearson correlation, Spearman rank correlation, Euclidean similarity)&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Input analysis 001.png|center|x300px|Image Caption]]&lt;br /&gt;
&amp;lt;div align=&amp;quot;center&amp;quot; style=&amp;quot;font-size:88%;line-height: 2em&amp;quot;&amp;gt;&#039;&#039;Image 8: Selection of data analysis aspects&#039;&#039;&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;  &lt;br /&gt;
* Review the analysis results: In the provided graphs you can see the visualised results of the analysis.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Input analysis 002.png|center|x300px|Image Caption]]&lt;br /&gt;
&amp;lt;div align=&amp;quot;center&amp;quot; style=&amp;quot;font-size:88%;line-height: 2em&amp;quot;&amp;gt;&#039;&#039;Image 9: Inspect analysis visualisations&#039;&#039;&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Resources ==&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Code available on [https://github.com/AI-REDGIO-5-0/ci-component GitHub]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Created by [https://www.scch.at/ Software Competence Center Hagenberg - SCCH]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Contact &#039;&#039;jorge.martinez-gil@scch.at&#039;&#039;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Relevant Categories==&lt;br /&gt;
[[Category:Collaborative Intelligence / Human-in-the-Loop]][[Category:Software-as-a-Service]]&lt;/div&gt;</summary>
		<author><name>Admino739mjm7</name></author>
	</entry>
	<entry>
		<id>https://wiki.ai-redgio50.s5labs.eu/index.php?title=AI_REDGIO_5.0_Collaborative_Intelligence_Platform&amp;diff=520</id>
		<title>AI REDGIO 5.0 Collaborative Intelligence Platform</title>
		<link rel="alternate" type="text/html" href="https://wiki.ai-redgio50.s5labs.eu/index.php?title=AI_REDGIO_5.0_Collaborative_Intelligence_Platform&amp;diff=520"/>
		<updated>2025-10-07T14:54:23Z</updated>

		<summary type="html">&lt;p&gt;Admino739mjm7: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; &lt;br /&gt;
&amp;lt;strong&amp;gt;Facilitating Human-AI collaboration through cutting-edge AI capabilities&amp;lt;/strong&amp;gt; [[File:Step 1.png&lt;br /&gt;
|thumb|right|Image 1: The AI REDGIO 5.0 Collaborative Intelligence Platform]]&lt;br /&gt;
&lt;br /&gt;
== Asset Description ==&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
The AI REDGIO 5.0 Collaborative Intelligence Platform is a solution at the forefront of Industry 5.0, combining various technological advancements to redefine industrial landscapes. The platform facilitates Human-AI collaboration by integrating cutting-edge AI capabilities. In this way, it is intended to illustrate the potential of connected devices, sensors, and machines through real-time data fusion and analysis, driving optimal decision-making and resource allocation.&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Features ==&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto; width:100%; line-height: 1.5em&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Feature !! Description&lt;br /&gt;
|-&lt;br /&gt;
| Integration of Advanced Technologies|| Integration of cutting-edge technologies, including AI-driven analytics, and Internet of Things (IoT)-enabled devices to create a synergistic ecosystem&lt;br /&gt;
|-&lt;br /&gt;
| Human-Machine Interaction|| Facilitating interaction between human operators and machines&lt;br /&gt;
|-&lt;br /&gt;
| Support of Real-time Operations|| The platform works in tandem with real-time data analytics on IoT data&lt;br /&gt;
|-&lt;br /&gt;
| Continuous Learning and Knowledge Management|| Storage in a knowledge-sharing and learning management repository&lt;br /&gt;
|-&lt;br /&gt;
| Collaborative Innovation || Fostering a culture of collaborative innovation, where human operators can interact with AI for improved processes and final product&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== User Journey ==&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
Our platform adopts a practical approach to problem-solving, data analysis, and process optimisation. This subsection offers a comprehensive usage walkthrough, explaining how the user interacts with the Collaborative Intelligence Platform both as a whole and as part of the MLOps operations and tools they already use.&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
[[File:Collaborative Intelligence Platfrom for Industry 5.0 User Journey.png|center|x300px|Image Caption]]&lt;br /&gt;
&amp;lt;div align=&amp;quot;center&amp;quot; style=&amp;quot;font-size:88%;line-height: 2em&amp;quot;&amp;gt;&#039;&#039;Image 2: Collaborative Intelligence Platform for Industry 5.0 - User Journey&#039;&#039;&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
# &#039;&#039;&#039;Onboarding and User Authentication&#039;&#039;&#039;: Ensure secure access to the system, safeguarding sensitive data and mitigating potential security breaches&lt;br /&gt;
# &#039;&#039;&#039;Integrating IoT and Smart Devices&#039;&#039;&#039;: The integration of IoT and smart devices using a methodology for the interconnection of heterogeneous devices&lt;br /&gt;
# &#039;&#039;&#039;AI-driven Data Analytics and Insights&#039;&#039;&#039;: The system can start working to detect patterns, anomalies, and trends within the data through ML algorithms&lt;br /&gt;
# &#039;&#039;&#039;Human-Machine Interaction and Augmentation&#039;&#039;&#039;: Coordination of actions between human operators and machines, emphasising augmentation rather than substitution&lt;br /&gt;
# &#039;&#039;&#039;Real-Time Process Optimisation&#039;&#039;&#039;: Automatically adjusting operational parameters in response to changing conditions and input provided by the user through the [[#Collaborative Intelligence Component]]&lt;br /&gt;
# &#039;&#039;&#039;Knowledge Base and Learning Management&#039;&#039;&#039;: Exploring the knowledge repository containing organisational know-how, accumulated insights, and best practices&lt;br /&gt;
# &#039;&#039;&#039;Collaborative Innovation and Continuous Improvement&#039;&#039;&#039;: The ultimate goal of Collaborative Intelligence: facilitating cross-functional collaboration and iterative enhancement through data analytics, human-machine collaboration, and organisational knowledge&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Collaborative Intelligence Platform Components==&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
The AI REDGIO 5.0 Collaborative Intelligence Platform comprises three underlying components.&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto; width:100%; line-height: 1.5em&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Component !! Description !! Type !! Status&lt;br /&gt;
|-&lt;br /&gt;
| Collaborative Intelligence Component ||Allows the human operator to accept or reject a manufactured product. This component has an interface that can be configured around various metrics, including, but not limited to, accuracy, interpretability, and speed ||User-facing ||Working Prototype&lt;br /&gt;
|-&lt;br /&gt;
| Pipeline Creation Component ||Easily create, configure, and deploy workflows in various work environments. This component acts as the backbone of a robust and efficient workflow management system, enabling organisations to optimise their processes and achieve higher productivity ||User-facing ||Work in Progress&lt;br /&gt;
|-&lt;br /&gt;
| Interfacing Component ||Connects the operator interface with hardware and housing pre-trained models ready for deployment in manufacturing environments. This component acts as the bridge between the digital and physical realms of experiments ||Backend ||Working Prototype&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Usage Walkthroughs===&lt;br /&gt;
How to use the AI REDGIO 5.0 Collaborative Intelligence Platform&lt;br /&gt;
==== Collaborative Intelligence Component====&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
The human operator can assess the results of the AI/ML models and provide feedback with regard to three aspects (accuracy, energy-efficiency, latency) through the Collaborative Intelligence (C.I) component. The operator&#039;s feedback is propagated back to the user&#039;s AI/ML analytics tool, which makes the relevant adjustments to the model configuration to optimise results.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt;1. Entering the C.I Platform&amp;lt;/strong&amp;gt;&lt;br /&gt;
* Navigate to the URL https://github.com/AI-REDGIO-5-0/ci-component&lt;br /&gt;
* Enter your credentials &#039;&#039;(Note: User registration and account creation are managed by [https://www.scch.at/ Software Competence Center Hagenberg - SCCH])&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt;2. Data and ML/AI results inspection &amp;lt;/strong&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After successfully entering the platform, you are led to the C.I Dashboard, where you can review your data and provide feedback.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Upload Data&#039;&#039;&#039;: First you need to provide the data that you want to inspect. &#039;&#039;(Note: Data are manually onboarded. In upcoming releases, the onboarding will happen through integration with the user&#039;s AI/ML tools and system.)&#039;&#039;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:00 - Upload Data.png|center|x300px|Image Caption]]&lt;br /&gt;
&amp;lt;div align=&amp;quot;center&amp;quot; style=&amp;quot;font-size:88%;line-height: 2em&amp;quot;&amp;gt;&#039;&#039;Image 3: Upload data&#039;&#039;&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;  &lt;br /&gt;
* &#039;&#039;&#039;Select ML/AI evaluation axis&#039;&#039;&#039;: ML/AI models can be evaluated based on various parameters. Click one of the available radio buttons (accuracy, energy-efficiency, latency) to instantiate the Table and the charts with the corresponding data and provide your feedback&lt;br /&gt;
* &#039;&#039;&#039;View the data&#039;&#039;&#039;: The Table is loaded with the data from the integrated dataset (containing the ML/AI results). You can select in a dropdown the ML/AI output feature that you would like to inspect.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:01 - Select feature.png|center|x300px|Image Caption]]&lt;br /&gt;
&amp;lt;div align=&amp;quot;center&amp;quot; style=&amp;quot;font-size:88%;line-height: 2em&amp;quot;&amp;gt;&#039;&#039;Image 4: Selection of feature to be inspected&#039;&#039;&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt;3. Human provides feedback to AI&amp;lt;/strong&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Provide feedback&#039;&#039;&#039;: Check the value of the output feature per row and provide feedback on the ML/AI performance with regard to the evaluation axis you have selected. Feedback takes the form of Yes (i.e., the ML/AI model was accurate/energy-efficient/fast for the specific output feature value) or No (i.e., it was not accurate/not energy-efficient/slow)&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:02 - Provision of feedback.png|center|x300px|Image Caption]]&lt;br /&gt;
&amp;lt;div align=&amp;quot;center&amp;quot; style=&amp;quot;font-size:88%;line-height: 2em&amp;quot;&amp;gt;&#039;&#039;Image 5: Provision of feedback on the ML/AI model from the aspect of correctness&#039;&#039;&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Check the evaluation charts&#039;&#039;&#039;: The bar charts that appear on the top of the C.I Dashboard provide an overview of the ML/AI evaluation by the human operator so far&lt;br /&gt;
* &#039;&#039;&#039;Export findings&#039;&#039;&#039;: Click the option &#039;Print Non-OK Rows to a File&#039; to export the rows marked as non-OK for further use&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:03 - export non ok rows.png|center|x300px|Image Caption]]&lt;br /&gt;
&amp;lt;div align=&amp;quot;center&amp;quot; style=&amp;quot;font-size:88%;line-height: 2em&amp;quot;&amp;gt;&#039;&#039;Image 6: Export non-ok rows&#039;&#039;&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
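The export step above amounts to filtering the operator&#039;s verdicts down to the rows marked No. As a rough sketch of that logic in standard-library Python, where the record shape, column names, and CSV output format are illustrative assumptions rather than the component&#039;s actual schema:&lt;br /&gt;

```python
import csv
import io

# Each record pairs an ML/AI output value with the operator's Yes/No verdict
# for the selected evaluation axis (accuracy, energy-efficiency, or latency).
# Hypothetical sample data for illustration only.
feedback = [
    {"row_id": 1, "output_value": 0.91, "verdict": "Yes"},
    {"row_id": 2, "output_value": 0.42, "verdict": "No"},
    {"row_id": 3, "output_value": 0.88, "verdict": "Yes"},
    {"row_id": 4, "output_value": 0.17, "verdict": "No"},
]

def export_non_ok_rows(rows):
    """Keep only rows the operator marked 'No' and serialise them as CSV."""
    non_ok = [r for r in rows if r["verdict"] == "No"]
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["row_id", "output_value", "verdict"])
    writer.writeheader()
    writer.writerows(non_ok)
    return buf.getvalue()

csv_text = export_non_ok_rows(feedback)
```

In this sketch, rows 2 and 4 would be the ones written out, under a header row naming the three fields.&lt;br /&gt;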
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt;4. AI is adjusted according to Human Feedback&amp;lt;/strong&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;font-size:75%&amp;quot;&amp;gt;&#039;&#039;Note: Will be part of upcoming releases&#039;&#039;&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Interplay with Input Analysis Tool===&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
To further assist the data scientists and other users in making sense of their data, thus enhancing human-AI collaboration, the AI REDGIO 5.0 Collaborative Intelligence Platform works in tandem with the Input Analysis tool.&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
==== How it works====&lt;br /&gt;
&amp;lt;p style=&amp;quot;line-height: 1.5em&amp;quot;&amp;gt;&lt;br /&gt;
Through the [https://wiki.ai-redgio50.s5labs.eu/index.php?title=Data_Analysis_Dashboard Data Analysis Dashboard], the human operator can explore their data and perform input analysis based on user-selected aspects and thresholds to find correlations between features.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt;1. Upload input data to C.I Dashboard and navigate to the Data Analysis Dashboard &amp;lt;/strong&amp;gt;&lt;br /&gt;
* Go to the main Dashboard page of the C.I Dashboard&lt;br /&gt;
* Click &#039;Choose file&#039; and select the input data file from the directory&lt;br /&gt;
* Click &#039;Upload csv&#039; to onboard your input data&lt;br /&gt;
* Click the &#039;Input Analysis&#039; hyperlink to be led to the relevant tool to inspect your data&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Step 1.png|center|x300px|Image Caption]]&lt;br /&gt;
&amp;lt;div align=&amp;quot;center&amp;quot; style=&amp;quot;font-size:88%;line-height: 2em&amp;quot;&amp;gt;&#039;&#039;Image 7: Link to Data Analysis Dashboard from C.I Dashboard&#039;&#039;&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;  &lt;br /&gt;
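A minimal sketch of what the CSV onboarding step above consumes, assuming a plain comma-separated file with a header row; the column names are hypothetical and the component&#039;s actual validation rules are not documented here:&lt;br /&gt;

```python
import csv
import io

# Hypothetical input file contents: a header row plus sensor readings.
sample = (
    "timestamp,temperature,vibration\n"
    "2024-01-01T00:00,21.5,0.03\n"
    "2024-01-01T00:01,21.7,0.04\n"
)

def load_input_csv(text):
    """Parse an uploaded CSV and return (header, data rows); reject empty files."""
    table = list(csv.reader(io.StringIO(text)))
    if not table:
        raise ValueError("empty CSV upload")
    return table[0], table[1:]

header, rows = load_input_csv(sample)
```

After parsing, the header names the candidate features for analysis and each remaining row is one observation.&lt;br /&gt;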
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;strong&amp;gt;2. Perform input analysis&amp;lt;/strong&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After clicking the hyperlink, you are led to the Data Analysis Dashboard. Through its integration with the C.I Platform, the onboarded input data are already available there for exploration.&lt;br /&gt;
&lt;br /&gt;
* Select Data Analysis aspects: Through the Data Analysis Dashboard you can select the analysis aspects, including the pairs of columns to compare, the missing-value imputation method, and the numeric conversion method. Additionally, you can set the thresholds for the analysis (Pearson correlation, Spearman rank correlation, Euclidean similarity)&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Input analysis 001.png|center|x300px|Image Caption]]&lt;br /&gt;
&amp;lt;div align=&amp;quot;center&amp;quot; style=&amp;quot;font-size:88%;line-height: 2em&amp;quot;&amp;gt;&#039;&#039;Image 8: Selection of data analysis aspects&#039;&#039;&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;  &lt;br /&gt;
* Review the analysis results: In the provided graphs you can see the visualised results of the analysis.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Input analysis 002.png|center|x300px|Image Caption]]&lt;br /&gt;
&amp;lt;div align=&amp;quot;center&amp;quot; style=&amp;quot;font-size:88%;line-height: 2em&amp;quot;&amp;gt;&#039;&#039;Image 9: Inspect analysis visualisations&#039;&#039;&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Resources ==&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Code available on [https://github.com/AI-REDGIO-5-0/ci-component GitHub]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Created by [https://www.scch.at/ Software Competence Center Hagenberg - SCCH]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Contact &#039;&#039;jorge.martinez-gil@scch.at&#039;&#039;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Relevant Categories==&lt;br /&gt;
[[Category:Collaborative Intelligence / Human-in-the-Loop]][[Category:Software-as-a-Service]]&lt;/div&gt;</summary>
		<author><name>Admino739mjm7</name></author>
	</entry>
</feed>