Due to the complexity and opacity of recent advances in artificial intelligence, Explainable AI (XAI) has emerged as a way to enable the development of causal image-based models. This study examines shadow detection across several fields, including computer vision and visual effects. A three-fold approach was used: constructing a diverse dataset, integrating structural causal models with shadow detection, and applying interventions for both detection and inference. This study illustrates how shadow detection enhances the understanding of both causal inference and confounding variables, and shows that confounding factors have only a minimal impact on cause identification.

When considering an image, beyond seeing it as a container of objects, it is natural for a human being to give it meaning or to infer an explanation for some event of interest captured in it. But how can such an inference be reached through artificial intelligence? Causal inference can be applied in many areas of science and technology where accurate decisions are crucial, such as economics, epidemiology, image processing, and autonomous driving. Currently, there are widely studied methods that, through correlation, recognize and classify objects using datasets such as (Deng

This paper consists of four sections. In Section

Regarding the explainability of events or phenomena captured in an image or video, taking modelling as an essential step to achieve causal inference, Xin

By taking into account the underlying causes of shadow formation, causal inference can provide more accurate predictions and improve the overall realism of virtual environments. Given the relevance of this topic, experimentation on specific cases could contribute to evolving fields such as 3D graphics, where causal inference can be applied to improve the accuracy and efficiency of shadow detection, as opposed to traditional techniques such as ray tracing, which is computationally expensive when handling complex scenes with many objects (Levoy,

Our objective was to explain, by means of causal inference, the appearance of a shadow cast on the surface defined by the lower face of a 3D scene in which, in addition to being illuminated, a sphere is present. For this we established three steps; in the first one we generated

As already stated, we considered four observable features for the construction of the dataset, Table

Labelling of the observed features.

Feature | Label | Possible values
Light   | A     |
Sphere  | B     |
Surface | C     |
Shadow  | Y     |

Figure

Observations considered in the data set.

Thus, by means of Algorithm

Data generation
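The data-generation step described above can be sketched as follows. This is a minimal, hypothetical illustration assuming binary detection flags for light (A), sphere (B) and surface (C) and a small noise rate; the variable names follow the labelling table, but the probabilities and noise level are assumptions, not the study's exact values.

```python
import random

def generate_observations(n, seed=0, noise=0.01):
    """Generate n binary observations of (A, B, C, Y) for the 3D scene."""
    rng = random.Random(seed)
    rows = []
    for _ in range(n):
        a = int(rng.random() < 0.9)   # A: light detected
        b = int(rng.random() < 0.5)   # B: sphere detected
        c = int(rng.random() < 0.95)  # C: surface detected
        # A shadow (Y) is cast only when light, sphere and surface co-occur;
        # a small noise term models detection errors.
        y = a * b * c
        if rng.random() < noise:
            y = 1 - y
        rows.append({"A": a, "B": b, "C": c, "Y": y})
    return rows

data = generate_observations(1000)
```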

According to Pearl

SCM and variables independencies.

Once the model is built, we calculate the conditional probability distributions (CPDs), which are defined over a set of discrete, mutually dependent random variables and give the conditional probabilities of a single variable with respect to the others (Murphy,

As shown in Fig.

CPD for each variable of the model.
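In practice a probabilistic-modelling library (e.g. pgmpy's maximum-likelihood estimator) would fit the CPDs from data; the sketch below only illustrates the underlying frequency-counting idea for P(Y | A, B, C), using a toy sample rather than the study's dataset.

```python
from collections import Counter

def estimate_cpd(rows):
    """Estimate P(Y | A, B, C) by frequency counting over the observations."""
    joint = Counter((r["A"], r["B"], r["C"], r["Y"]) for r in rows)
    parents = Counter((r["A"], r["B"], r["C"]) for r in rows)
    # Each entry maps (a, b, c, y) -> P(Y=y | A=a, B=b, C=c)
    return {(a, b, c, y): n / parents[(a, b, c)]
            for (a, b, c, y), n in joint.items()}

# Toy sample: shadow always observed when all three parents are present
rows = [{"A": 1, "B": 1, "C": 1, "Y": 1}] * 9 + \
       [{"A": 1, "B": 0, "C": 1, "Y": 0}]
cpd = estimate_cpd(rows)
```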

Then, to strengthen our hypothesis, we asked the model what would happen if no sphere had been detected; in other words, we intervened on the model by removing the detection of the sphere in order to obtain the probability of detecting the shadow. To provide clarity on what role the SCM variables play in the causal inference process we follow, among others, Chiappa and Isaac (

Role of SCM variables in the Causal Inference process.

Role        | Variable
Treatment   | B
Confounders | [A, C]
Outcome     | Y
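The intervention do(B = 0) ("no sphere detected") can be computed with the truncated factorisation P(Y | do(B=0)) = Σ_{a,c} P(Y | a, B=0, c)·P(a)·P(c), marginalising over the confounders A and C. The sketch below uses illustrative probabilities, not the model's fitted values.

```python
# Illustrative marginals for the confounders (assumed, not fitted)
p_a = {0: 0.1, 1: 0.9}    # P(A): light detected
p_c = {0: 0.05, 1: 0.95}  # P(C): surface detected

def p_y_given(a, b, c):
    """P(Y=1 | A, B, C): a shadow requires light, sphere and surface."""
    return 0.99 if (a and b and c) else 0.01

def p_y_do_b(b):
    """P(Y=1 | do(B=b)) via the truncated factorisation over A and C."""
    return sum(p_y_given(a, b, c) * p_a[a] * p_c[c]
               for a in (0, 1) for c in (0, 1))

p0 = p_y_do_b(0)  # shadow probability when the sphere is removed
p1 = p_y_do_b(1)  # shadow probability when the sphere is forced present
```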

Subsequently, considering this intervention, we calculated for the whole set of cases

Finally, to contrast the treatment results and thus estimate how far the hypothesis was from being null, i.e. that there was no relationship between the sphere and the shadow, we calculated, for a 95% confidence interval, a table of
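This contrast step can be sketched as the average treatment effect (ATE), i.e. the difference between the two interventional probabilities, plus a chi-square test of independence on a 2x2 contingency table (sphere vs. shadow). The counts and probabilities below are illustrative, not the study's results; for one degree of freedom the p-value follows from the complementary error function.

```python
import math

def ate(p_do_1, p_do_0):
    """ATE = P(Y=1 | do(B=1)) - P(Y=1 | do(B=0))."""
    return p_do_1 - p_do_0

def chi2_2x2(table):
    """Pearson chi-square statistic and p-value for a 2x2 table (df = 1)."""
    (a, b), (c, d) = table
    n = a + b + c + d
    stat = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    # For df = 1: P(chi2 > stat) = erfc(sqrt(stat / 2))
    p_value = math.erfc(math.sqrt(stat / 2))
    return stat, p_value

effect = ate(0.998, 0.005)                  # illustrative probabilities
stat, p = chi2_2x2([[495, 5], [5, 495]])    # illustrative counts
# A tiny p-value rejects the null of no sphere-shadow relationship
```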

Within the realm of shadow detection in images, prominent methods include adaptive thresholding, threshold segmentation (Bradski,
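As a minimal illustration of the adaptive-thresholding idea, the sketch below marks a pixel as shadow when it is darker than the mean of its local window by more than an offset. A real pipeline would use an optimised routine such as OpenCV's cv2.adaptiveThreshold; this standard-library version, with an assumed window size and offset, only shows the principle on a toy grey-level image.

```python
def adaptive_shadow_mask(img, win=1, offset=10):
    """Mark pixels darker than their local mean minus an offset as shadow."""
    h, w = len(img), len(img[0])
    mask = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Mean intensity of the (2*win+1)^2 neighbourhood, clipped at borders
            ys = range(max(0, y - win), min(h, y + win + 1))
            xs = range(max(0, x - win), min(w, x + win + 1))
            vals = [img[j][i] for j in ys for i in xs]
            mean = sum(vals) / len(vals)
            mask[y][x] = 1 if img[y][x] < mean - offset else 0
    return mask

# Toy 4x4 grey image with one dark (shadowed) pixel
img = [[200, 200, 200, 200],
       [200,  40, 200, 200],
       [200, 200, 200, 200],
       [200, 200, 200, 200]]
mask = adaptive_shadow_mask(img)
```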

Illustrating this process, Algorithm

Algorithm for detection and causal inference of shadow phenomena

The structural causal model (SCM) was designed based on expert knowledge as Hernán and Robins (

NOTEARS causal discovery models with and without restrictions.

From the conditional probability distribution (CPD) it was possible to query the model under the hypothesis formulated. In Table

Causal inference from intervention

Outcome | Probability | ATE   |       |         | Confidence interval
Y(0)    | 0.995       | 0.993 | 11874 | 0.00001 | 95%
Y(1)    | 0.005       |       |       |         |

To establish a contrast, we employed an identical image and introduced a confounding element by aligning the background colour with the hue of the shadow projected onto the surface. We then ran the Felzenszwalb method integrated with the causal inference module, as well as the adaptive thresholding and threshold segmentation techniques. The outcomes of this comprehensive approach are visualized in Fig.

Shadow detection result.

In the context of shadow detection, the outcomes are evident. Among the approaches, the combination of the Felzenszwalb method with causal inference (refer to Fig.

It’s important to emphasize that the presence of confounding factors significantly influenced the accuracy of the detection results. However, when considering the determination of the shadow’s causality, the impact of confounding factors became negligible. Notably, only the Felzenszwalb method (refer to Fig.

We have shown how to employ causal inference to strengthen a hypothesis and, as a result, deduce the cause of a shadow phenomenon with high certainty. This is accomplished by utilizing interventions and queries within the causal model. We started with a set of images from a 3D scenario in which four events were examined as part of a structural causal model validated with the NOTEARS algorithm for causal discovery. By contrasting their performance, we also demonstrated that adding a causal inference module to a shadow detection approach is feasible and advantageous, opening the door for similar combinations in other diverse and complex settings. A causal model's visual representation improves understanding of the problem and of the roles that events play in its resolution. Although the causal model was validated with NOTEARS, a limitation was the need to set constraints based on expert knowledge. Compared to typical datasets used for machine learning applications, causal inference requires a dataset with a more intricate structure. Confounding factors had a considerable impact on the detection method's accuracy but not on the causal inference model. In the future, we plan a second version of this project in which we will improve causal inference by incorporating machine learning techniques; this combined approach will determine the origin of shadows detected in complex graphical settings.

The data presented in this study are available on request from the corresponding authors.

Jairo I. Velez: I thank the University of Zaragoza and Banco Santander (Universidad de Zaragoza-Santander Universidades) for providing me with one of the grants for Ibero-Americans in doctoral studies, thus making this research possible.