From the standard to reality: how to set up a Data Space step by step

Last week, we saw how the publication of Specification UNE 0087 in summer 2025 marked a milestone in the development of Data Spaces in Spain. For the first time, there is a common framework that defines what they are, how they are characterized, and which elements these data-sharing environments must include. However, between the standard and reality lies a path that requires strategic, technical, and organizational decisions.

Launching a Data Space is not just about complying with a specification: it is about designing an ecosystem of trust where data generates value without losing control. This article aims to walk that path, from the initial definition to full operation.

Defining the purpose of the space

Every Data Space begins with a key question: what does it exist for? Before discussing technology, connectors, or standards, the promoter must define the objective of the space—whether it is to address a specific challenge, activate a sector, improve public services, or foster territorial innovation. UNE 0087 provides the conceptual framework, but it is the purpose that determines the scope, the participants, and the operating model. A clear example is the Centre of Excellence in Intelligent Data Spaces in Zamora, whose purpose goes beyond mere technological infrastructure to turn the city into a hub for research, training, and entrepreneurship in the data economy.

This initial definition determines whether the space will be sector-specific or cross-sectoral, whether it will operate at a local, regional, or national level, and what type of value it is expected to generate. A Data Space focused on urban mobility is not structured in the same way as one centred on the circular economy or digital health. Purpose not only justifies the investment, but also guides all subsequent decisions. In Zamora’s case, the objective combines research into enabling technologies, specialised training, and business acceleration, simultaneously addressing regional economic revitalisation and the National Artificial Intelligence Strategy. 

Identifying the actors and their roles

A Data Space is not a platform, but a network of organisations. At this stage, the main roles are identified: who promotes the space, who provides data, who consumes it, and under what conditions. Defining these relationships from the outset helps avoid future conflicts and strengthens trust among participants.

The promoter may be a public administration, a sectoral consortium, or a private entity capable of orchestrating an ecosystem. Data providers may include companies, public bodies, IoT sensors, or digital platforms. Data consumers can range from researchers to service developers or public managers. Governance begins to take shape here, even if it has not yet been formalised, by establishing who has a voice in decision-making, who assumes responsibilities, and how benefits are distributed.
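Purely as an illustration, the role mapping described above can be captured in a lightweight participant registry. The sketch below assumes hypothetical role names and fields that are not drawn from UNE 0087:

```python
from dataclasses import dataclass, field
from enum import Enum

class Role(Enum):
    """Hypothetical role names; UNE 0087 may use different terminology."""
    PROMOTER = "promoter"        # orchestrates the ecosystem
    DATA_PROVIDER = "provider"   # contributes datasets to the space
    DATA_CONSUMER = "consumer"   # uses shared data to build services

@dataclass
class Participant:
    name: str
    roles: set[Role]
    responsibilities: list[str] = field(default_factory=list)

# Example: a city council acting both as promoter and data provider,
# and a local SME consuming data to build mobility services.
registry = [
    Participant("City Council", {Role.PROMOTER, Role.DATA_PROVIDER},
                ["governance decisions", "mobility sensor data"]),
    Participant("Local SME", {Role.DATA_CONSUMER},
                ["develops route optimisation services"]),
]
```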

Designing the rules of the game

One of the most critical steps is establishing the rules for data use: which data is shared, for what purpose, for how long, and under what conditions. The standard defines principles, but their practical application requires agreements that are clear, understandable, and acceptable to all parties.

This is a key point for overcoming one of the main barriers to data sharing: the fear of losing control. The rules must address issues such as data ownership, usage rights, access restrictions, retention periods, and conditions for third-party sharing. They should also cover scenarios involving modification, revocation, or termination of agreements. Transparency in these rules provides legal certainty and facilitates the onboarding of new participants. An effective approach integrates the FAIR principles (Findable, Accessible, Interoperable, Reusable) from the design stage, using clear taxonomies and well-documented usage conditions so that shared data remains easy to discover, access, and reuse.
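As a hedged sketch of what such rules can look like in machine-readable form, the dictionary below models a single usage agreement. Every field name, identifier, and URL is an illustrative placeholder rather than a UNE 0087 schema; in practice, Data Spaces often express these policies in standard vocabularies such as ODRL and describe datasets with catalogue metadata such as DCAT.

```python
# Illustrative only: a usage agreement sketched as a plain dictionary.
# Field names, identifiers, and the URL are hypothetical placeholders.
usage_agreement = {
    "dataset": "urn:example:mobility/bus-occupancy",
    "provider": "City Transport Authority",
    "consumer": "Route Optimisation SME",
    "permitted_purposes": ["service improvement", "research"],
    "prohibited_actions": ["resale", "re-identification of individuals"],
    "retention_days": 365,           # delete or anonymise after one year
    "third_party_sharing": False,    # no onward transfer without a new agreement
    "revocation": "provider may revoke with 30 days' notice",
    "fair_metadata": {               # supports the FAIR principles
        "licence": "CC-BY-4.0",
        "vocabulary": "DCAT plus a sector taxonomy",
        "access_endpoint": "https://connector.example.org/catalogue",
    },
}
```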

Translating governance into technical mechanisms

Trust is not declared—it is implemented. The agreed rules must be translated into technical mechanisms that guarantee access control, traceability, security, and interoperability. This is where connectors, digital identities, and European standards come into play. Research into decentralised architectures enables solutions such as federated learning, where multiple entities collaborate in developing artificial intelligence models while preserving data privacy. Distributed ledger technologies add an additional layer of trust through immutable traceability of data transactions.
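To make the federated learning idea concrete, here is a minimal sketch of federated averaging with a toy linear model. It assumes equally weighted participants, synthetic data, and plain gradient descent, and omits the secure aggregation and credential checks a real deployment would need; it is not tied to any specific Data Space framework.

```python
import numpy as np

def local_update(weights: np.ndarray, X: np.ndarray, y: np.ndarray,
                 lr: float = 0.1) -> np.ndarray:
    """One gradient-descent step on a local linear model; raw data stays local."""
    grad = X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

rng = np.random.default_rng(0)
global_weights = np.zeros(3)

# Each organisation holds its own (X, y); only model parameters are shared.
organisations = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(4)]

for _ in range(10):
    local_models = [local_update(global_weights, X, y) for X, y in organisations]
    global_weights = np.mean(local_models, axis=0)   # aggregate parameters only

print("aggregated model weights:", global_weights)
```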

Technology does not lead the process, but it makes it possible. Connectors act as exchange points that verify identities, enforce usage policies, and record transactions. Semantic interoperability ensures that data can be understood across different systems. A decentralised architecture allows each organisation to retain control over its data while participating in the ecosystem. These technical elements are not optional: they are the operational expression of governance agreements.
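The sketch below illustrates, under simplified assumptions, the control flow a connector applies to a data request: verify the requester's identity, evaluate the usage policy, and append a traceability record. The data structures and function names are invented for this example; production connectors, such as those following the International Data Spaces reference architecture, rely on verifiable credentials and formal policy languages like ODRL rather than in-memory dictionaries.

```python
import hashlib
import json
from datetime import datetime, timezone

# Hypothetical registries standing in for identity management and the
# agreed usage policies; real connectors verify credentials instead.
TRUSTED_PARTICIPANTS = {"consumer-42": "Route Optimisation SME"}
POLICIES = {"urn:example:mobility/bus-occupancy": {"allowed_purposes": {"research"}}}
AUDIT_LOG: list[dict] = []

def handle_request(consumer_id: str, dataset: str, purpose: str) -> bool:
    """Grant or deny a data request and record the decision."""
    identity_ok = consumer_id in TRUSTED_PARTICIPANTS        # identity verification
    policy = POLICIES.get(dataset)
    granted = (identity_ok and policy is not None
               and purpose in policy["allowed_purposes"])    # policy enforcement
    entry = {                                                # traceability record
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "consumer": consumer_id,
        "dataset": dataset,
        "purpose": purpose,
        "granted": granted,
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    AUDIT_LOG.append(entry)
    return granted

print(handle_request("consumer-42", "urn:example:mobility/bus-occupancy", "research"))
```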

Ensuring regulatory compliance by design

One of the distinguishing values of Data Spaces is that they are born aligned with European regulation. GDPR, the Data Act, and the AI Act are not managed as afterthoughts, but as integral parts of the design. This approach reduces risks, generates legal certainty, and facilitates adoption by companies and public administrations.

Regulatory compliance involves implementing privacy-by-design mechanisms, ensuring data portability, establishing informed consent procedures, and providing for usage audits. It also requires clarity regarding responsibilities in the event of non-compliance and accessible complaint mechanisms for data subjects. Regulation is not an obstacle, but a framework of guarantees that strengthens the credibility of the space.
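As one possible illustration of consent handling by design, the sketch below models a minimal consent register in which consent is recorded per purpose, checked before processing, and can be withdrawn at any time. The class and field names are hypothetical and are not taken from the GDPR, the Data Act, or UNE 0087.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ConsentRecord:
    """A single grant of consent for one purpose; hypothetical fields."""
    subject_id: str
    purpose: str
    granted_at: datetime
    withdrawn_at: Optional[datetime] = None

    def is_active(self) -> bool:
        return self.withdrawn_at is None

class ConsentRegister:
    def __init__(self) -> None:
        self._records: list[ConsentRecord] = []

    def grant(self, subject_id: str, purpose: str) -> None:
        self._records.append(
            ConsentRecord(subject_id, purpose, datetime.now(timezone.utc)))

    def withdraw(self, subject_id: str, purpose: str) -> None:
        for record in self._records:
            if (record.subject_id == subject_id and record.purpose == purpose
                    and record.is_active()):
                record.withdrawn_at = datetime.now(timezone.utc)

    def allows(self, subject_id: str, purpose: str) -> bool:
        # Processing should only proceed while an active consent exists.
        return any(r.subject_id == subject_id and r.purpose == purpose
                   and r.is_active() for r in self._records)

register = ConsentRegister()
register.grant("citizen-001", "mobility analytics")
assert register.allows("citizen-001", "mobility analytics")
register.withdraw("citizen-001", "mobility analytics")
assert not register.allows("citizen-001", "mobility analytics")
```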

Activating the first use cases

A Data Space is not validated by its architecture, but by its real-world use. Deployment typically begins with pilot use cases—controlled, focused, and with clear impact. These initial scenarios allow rules, technology, and governance to be fine-tuned before scaling the ecosystem.

This is also the point at which the value of data becomes tangible. A well-designed use case demonstrates that data sharing delivers concrete benefits: improving operational efficiency, enabling new services, optimising public resources, or driving innovation. Pilots help identify technical frictions, regulatory gaps, or cultural resistance that were not evident during the design phase. The learning generated at this stage is essential to ensure scalability.

Evolution and continuous improvement

A Data Space is not a closed project. It is a living infrastructure that evolves with new participants, new data, and new uses. The standard defines a starting point, but the sustainability of the space depends on its ability to adapt.

Governance must include mechanisms for periodic review, onboarding of new actors, technological updates, and impact assessment. It should also anticipate interoperability with other Data Spaces, both sectoral and territorial, facilitating the emergence of an integrated European ecosystem. Continuous improvement is not only technical: it also involves the evolution of data culture, participant training, and the strengthening of collective trust.

UNE 0087 lays the foundations for building aligned, interoperable, and trustworthy Data Spaces. But turning that foundation into an operational reality requires vision, coordination, and a progressive approach. From the standard to reality, there is no leap—only a path to be walked step by step, combining regulatory frameworks, governance, and technology in the service of data-driven value.