LLM Conditioning

As a final step, we used the ontology as a conditioning framework for a Large Language Model. By providing the LLM with selected case studies and constraining it to our ontology structure, we asked it to generate knowledge graphs compliant with MDO. This experiment tested the robustness of our ontology as a modeling framework and its applicability to AI-assisted knowledge extraction.


1. Prompt

TASK: Rigorous RDF/Turtle mapping of a narrative onto a provided ontology.

  • Read the provided .ttl ontology file and associate every single piece of data extracted from the story to a specific class or property.

  • Use ONLY the classes, object properties, and data properties defined in the provided ontology. Do not invent new terms.

  • Use correct RDF/Turtle syntax.

  • Use the prefix mdo: for the Monument Debate Ontology.
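For illustration, a compliant answer would take roughly the following shape. This is only a sketch of the expected syntax: the namespace URI and the class and property names (mdo:Monument, mdo:hasDebate) are hypothetical placeholders, not necessarily terms from the actual ontology.

```turtle
@prefix mdo: <http://example.org/mdo#> .
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .

# Every extracted fact is typed with an ontology class and linked
# only through properties declared in the provided .ttl file.
mdo:montanelli_statue rdf:type mdo:Monument ;
    mdo:hasDebate mdo:montanelli_debate .
```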

2. Our Ontology

(ontology .ttl file provided as attachment, 567 KB)
3. Case Study

It is a bright spring Saturday in Milan. In Piazza della Repubblica, the statue of Indro Montanelli sits in the public gardens that bear his name. Sculpted in bronze by Vito Tongiani, the monument shows Montanelli seated at his Olivetti typewriter. It was commissioned by the Municipality of Milan and inaugurated in 2006, meant as a tribute to one of the most influential Italian journalists of the twentieth century. On the pedestal, the engraved inscription reads: “Indro Montanelli, Journalist.” But today the statue is not simply a monument. It is the center of a long, painful debate.

The 2020 protest: The crowd gathered closest to the police cordon is young, loud, and determined. Members of the student organisation Rete Studenti Milano stand at the front, holding signs splashed with red paint—the same red they threw on the statue just days earlier, where they also sprayed at the base the words “Racist, Rapist” in black.

For the students, the statue represents a celebration without context, a public honor that ignores Montanelli’s statements on colonialism and his marriage to a twelve‑year‑old Eritrean girl. They argue that Milan cannot continue to commemorate a figure without acknowledging the harm tied to his actions.

Across the cordon stands a journalist, visibly shaken by the scene. He speaks to a small group of supporters, explaining that Montanelli’s legacy in journalism is immense, that he shaped Italian reporting for decades. Removing the statue, he says, would not correct history—it would erase it. He believes the monument should remain, perhaps contextualized, but not torn down.

A moment of confrontation: As the chants soften for a moment, the journalist and one of the student representatives find themselves unexpectedly close, separated only by a thin line of police.

“Public spaces should not glorify someone who caused harm,” the student says. “This statue stands here without any explanation. People walk by and see only a celebrated journalist.”

The journalist nods. “I don’t deny the truth. But removing him from sight won’t make the past disappear. Montanelli shaped Italian journalism. His contradictions should be explained, not erased.”

“So add context,” the student replies. “Add a plaque. Add a panel. Tell the whole story. But don’t pretend this statue is neutral.”

The statue stands unchanged, but everything around it has shifted. For some, it embodies racism, colonial violence, and the normalization of a painful past. For others, it represents journalistic legacy, freedom of expression, and the need to preserve history even when it is uncomfortable. And so Montanelli remains there—silent, immobile, suspended between two narratives that refuse to meet.
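To make the expected mapping concrete, a small fragment of a correct graph for this story might look like the sketch below. The mdo: namespace URI and instance names are illustrative assumptions; crm:E12_Production and dcterms:date appear in our ontology, and crm:P108i_was_produced_by is the standard CIDOC-CRM production link.

```turtle
@prefix mdo:     <http://example.org/mdo#> .
@prefix crm:     <http://www.cidoc-crm.org/cidoc-crm/> .
@prefix dcterms: <http://purl.org/dc/terms/> .
@prefix xsd:     <http://www.w3.org/2001/XMLSchema#> .

# The statue linked to its production event; the inauguration year
# is kept as a distinct date value, not folded into an identifier.
mdo:montanelli_statue crm:P108i_was_produced_by mdo:montanelli_production .

mdo:montanelli_production a crm:E12_Production ;
    dcterms:date "2006"^^xsd:gYear .
```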

4. RDF/TTL Generation

5. Knowledge Graph

LLM Performance Analysis

The LLM demonstrated a consistent ability to identify and map the core classes and properties of the provided ontology. However, the mapping was not exhaustive, and several interventions were required to achieve a complete graph.

Key Findings:

  • Connectivity Issues in Complex Classes: The most complex parts of the model, specifically the ceon-actor:Stakeholder and mdo:Argument classes, were initially generated with incomplete properties. Multiple prompts and targeted follow-up requests were needed before the LLM created all the required logical links. Without these interventions, the model failed to respect the full relational structure of the ontology on its own.

  • Data Fragmentation and Misclassification: The LLM occasionally struggled with specific classes. A notable example is the handling of dcterms:date. Instead of maintaining it as a distinct entity, the LLM frequently merged it into other instances, for example folding the year into the identifier of the crm:E12_Production instance (e.g., production_2006).

  • Omission of Temporal Context and Discussion: The model failed to explicitly instantiate the deo:Discussion class and the tip:TimeInterval class to represent the duration of the protest, even though both were mentioned in the text.

  • Consistent Failure with Contextualization and Setting: Across multiple tests, the LLM failed to utilize mdo:isContextualisedBy. This property links the monument to physical contextualization elements (plaques/panels). Instead, the model misattributed these to crm:E26_Physical_Feature, which should describe the statue's original state. Furthermore, the mdo:debateSetting class, defining the environment of the confrontation (the police cordon), was consistently omitted.
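The last finding can be summarized in a before/after sketch. Instance names and the mdo: namespace URI are illustrative assumptions; the class and property names are those cited above.

```turtle
@prefix mdo: <http://example.org/mdo#> .
@prefix crm: <http://www.cidoc-crm.org/cidoc-crm/> .

# What the LLM produced: the proposed plaque typed as a physical
# feature, i.e., part of the statue's original state.
# mdo:context_plaque a crm:E26_Physical_Feature .

# Intended modeling: contextualization elements attached through
# the dedicated property, and the setting made explicit.
mdo:montanelli_statue mdo:isContextualisedBy mdo:context_plaque .
mdo:police_cordon a mdo:debateSetting .
```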

Conclusion:

While the LLM is capable of a "macro" mapping, it tends to simplify or ignore the most granular and relational aspects of the ontology. Substantial human oversight is required to ensure that complex classes are not just present, but correctly interconnected according to the schema's properties.
