The Scope
UC3 will focus on bridging the distance between urban and industrial spaces. Several activities are in progress: (a) for people mobility, pilots on multimodal mobility, technology acceptance for autonomous vehicles, and the development and deployment of mobility-as-a-service concepts; (b) for freight mobility, bundling of goods, using different modes of transport and drive technologies to reduce the CO2 footprint of logistics, and vehicle routing and route optimisation calculations, especially to improve the situation in urban spaces. These activities have in common that they are in pilot phases, do not link different organisations, and hardly ever build on a shared set of data to generate and derive information and knowledge. To overcome this and to increase the efficiency and environmental sustainability of people and freight mobility, UC3 emphasises building new solutions on sensors, data, and new hard- and software. With this, new knowledge can be generated and people and freight mobility improved, leading to increased liveability in urban areas and higher environmental and economic sustainability.
The Objectives
To transform people and freight mobility towards higher levels of environmental and economic sustainability, a transition to new drive technologies and, where necessary, temperature-control technologies in logistics is needed. The availability of energy is one of the key challenges: many grids are not strong enough to charge the vehicles, and there is not enough charging infrastructure available for people transport, let alone for freight transport. Since buildings and facilities in urban and industrial spaces can generate and store energy, UC3 builds on the energy-provision solutions of UC1 and UC2 to charge electrified vehicles according to the energy demand of the routes they have to drive.
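The matching of route energy demand against building-side energy supply can be illustrated with a minimal Python sketch. All figures (consumption per kilometre, surplus energy) and route names below are hypothetical assumptions for illustration, not project parameters or the project's actual algorithm:

```python
# Minimal illustrative sketch: estimate the energy demand of planned routes
# and check which ones can be charged from the surplus energy that buildings
# and facilities (UC1/UC2) provide. All figures and names are hypothetical.

def route_energy_kwh(distance_km: float, kwh_per_km: float) -> float:
    """Energy a vehicle needs to complete a route."""
    return distance_km * kwh_per_km

def chargeable_routes(routes, surplus_kwh: float):
    """Greedily select routes whose demand fits the available building
    surplus, smallest demand first, and return their names."""
    selected = []
    for name, demand in sorted(routes, key=lambda r: r[1]):
        if demand <= surplus_kwh:
            selected.append(name)
            surplus_kwh -= demand
    return selected

routes = [("depot->plant", route_energy_kwh(18.0, 1.5)),  # 27.0 kWh
          ("plant->hub", route_energy_kwh(42.0, 1.5)),    # 63.0 kWh
          ("hub->city", route_energy_kwh(9.0, 1.5))]      # 13.5 kWh
print(chargeable_routes(routes, surplus_kwh=45.0))  # ['hub->city', 'depot->plant']
```

A real deployment would replace the greedy selection with the UC1/UC2 energy models; the sketch only shows the demand-driven coupling described above.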
Description: For freight mobility (i.e., logistics), the focus will be collaborative route optimisation with the goal of minimising energy demand and the number of vehicles needed. The developed solution holds regardless of the drive technology of the vehicles in use and of whether they operate with a driver or driverless. Real-time data from streets and IoT data from personal devices serve as a foundation for dynamic vehicle routing, route optimisation, and fleet decomposition algorithms to reduce congestion and increase efficiency in logistics. For people mobility, real-time data will also be used to develop on-demand solutions in scheduling and fleet decomposition to efficiently move people between urban and industrial spaces at the highest levels of comfort. Furthermore, new services based on available data for in-room safety and atmosphere will be developed to increase the comfort of commuting in collective transport vehicles. The design and implementation of a “smart vehicle” (a bus provided by SETA and a car provided by CSIC) will be emphasised; it relies on the integration of multi-sensor IoT systems with distributed embedded intelligence, combining innovative internal and external sensing solutions to derive a representative characterisation of the “status” of the vehicle. In detail, the use of multiple sensors to collect environmental and road-safety information is envisioned, with the step forward with respect to existing approaches being the combination of all the (heterogeneous) collected data. The communication architecture defined in UC3 will rely on next-generation trustworthy open IoT architectures, able to support multiple internal (e.g., WiFi, Bluetooth, Ethernet) and external (e.g., cellular 4/5G, LoRaWAN) communication protocols for information exchange.
The vehicle status will then be correlated with the status of the smart city through which the vehicle is moving, providing significant insights into the characterisation and optimisation of urban mobility patterns. Moreover, the results delivered by UC3 will also be instrumental to the design of innovative multi-modal people and freight mobility systems. All this supports the increase of liveability in urban spaces, the reduction of the CO2 footprint of freight transport, and the increase of comfort in people's mobility services.
The Country & Area
Modena Automotive Smart Area, Modena/IT; CSIC proving grounds, Madrid/ES; Zurich and Frauenfeld/CH. The Smart Bus Scenario will be carried out in Modena, Italy, both in a specific city area dedicated to smart mobility (Modena Automotive Smart Area, MASA) and in the entire city (across a bus line of SETA). The Smart Car Scenario will be carried out in Madrid, Spain, on the proving grounds of CSIC. The smart and urban logistics scenario will be conducted in the area of Zurich. The on-demand transportation scenarios will be conducted in the area of Frauenfeld, Switzerland.
The Demonstrations
- Demonstration 3.1: Vehicle route, green route optimisation, and fleet decomposition algorithms based on types of goods and recipient location
- Demonstration 3.2: Algorithm for fleet optimisation of on-demand transport services in commuter traffic
- Demonstration 3.3: Strengthened Situational Awareness for automated vehicles aligned with Cities Mission
- Demonstration 3.4: Bus air quality management for healthier passenger transportation
- Demonstration 3.5: Privacy-preserving Video-enhanced people monitoring for safer transport
Demonstration #3.1 Vehicle route, green route optimisation, and fleet decomposition algorithms based on types of goods and recipient location. Urban areas suffer from high numbers of logistics vehicles delivering numerous products. (a) Fleet decomposition optimisation (KI4.1) is achieved by taking into account the information from UC2 about the materials, goods, and products and the destination of the respective goods/products. The higher the level of cooperative shipment from industrial to urban spaces, the higher the optimisation potential in reducing the number of logistics vehicles used and the ton-kilometres for the whole shipment. (b) Route optimisation (KI3.4) can be achieved when freight with similar handling requirements and similar destinations is bundled at the source (i.e., industrial spaces) and allocated to vehicles that allow delivery according to the requirements of the shipped goods. Real-time data and data from the vehicles and the goods will be merged to optimise the routes dynamically. This means that the route is calculated not only at the beginning of the tour but also during the tour, taking into account the latest street-condition information, the behaviour of the driver (if the vehicle is not autonomous) and the resulting energy use, and sensor data that track the energy use of main and ancillary aggregates. (c) Last-mile delivery modes (KI2.6) and corresponding decoupling hubs, used to cross-dock the freight from logistics vehicles between industrial and urban spaces for the final distribution in urban areas, are developed based on route optimisation algorithms and vehicle decompositions for the last miles. (d) The results are linked back to UC1 and UC2 regarding energy demand and supply in facilities and buildings. For both the logistics routes between industrial and urban spaces and those within urban spaces, the algorithms are developed for all kinds of vehicles, including autonomous vehicles, lorries, cargo bikes, etc., so that future application in different spaces is possible without high adaptation effort. This demonstration will also tackle human-in-the-loop decision-making (KI2.3), utilisation of trustworthy AI (KI2.5), improved AI immunity (KI4.2), and decentralised accountability (KI4.1), all integral to fleet management.
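The bundling and dynamic re-routing steps in (b) can be sketched as follows. This is a minimal Python illustration, not the project's actual algorithm: the shipment record format, the distance matrix, and the congestion-factor model are all assumptions made for the example.

```python
# Illustrative sketch of freight bundling and congestion-aware routing.
# Real-time street data is modelled as per-edge slowdown factors, so the
# tour can be recomputed during the trip as conditions change.

from collections import defaultdict

def bundle(shipments):
    """Group shipments that share handling requirements and destination zone."""
    groups = defaultdict(list)
    for s in shipments:
        groups[(s["handling"], s["zone"])].append(s["id"])
    return dict(groups)

def nearest_neighbour_tour(depot, stops, dist, congestion=None):
    """Greedy tour; `congestion` maps an edge (a, b) to a slowdown
    factor >= 1 derived from live street data."""
    congestion = congestion or {}
    def cost(a, b):
        return dist[(a, b)] * congestion.get((a, b), 1.0)
    tour, here, todo = [depot], depot, set(stops)
    while todo:
        nxt = min(todo, key=lambda s: cost(here, s))
        tour.append(nxt)
        todo.discard(nxt)
        here = nxt
    return tour

dist = {("D", "A"): 2.0, ("D", "B"): 5.0, ("A", "B"): 1.0, ("B", "A"): 1.0}
print(nearest_neighbour_tour("D", ["A", "B"], dist))                     # ['D', 'A', 'B']
print(nearest_neighbour_tour("D", ["A", "B"], dist, {("D", "A"): 4.0}))  # ['D', 'B', 'A']
```

The second call shows the dynamic aspect: a congested edge makes the tour change mid-plan, which mirrors the in-tour recalculation described above.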
Demonstration #3.2 Algorithm for fleet optimisation of on-demand transport services in commuter traffic. To reduce the number of private mobility vehicles in urban spaces and between urban and industrial spaces, an algorithm for fleet and route optimisation for on-demand transport services will be developed. The algorithm uses IoT, AI, and real-time data to derive the transport demand from different neighbourhoods within an urban space and towards an industrial space. This allows predicting the number of needed transport vehicles and drivers (as long as the vehicles are not autonomous) over time, based on the origins and destinations of commuters. This will reduce private traffic without compromising the commuters' comfort. Complementary to Demonstration #3.1, human-in-the-loop decision-making (KI2.3), utilisation of trustworthy AI (KI2.5), fleet optimisation considering last-mile delivery (KI2.6), route optimisation (KI3.4), improved AI immunity (KI4.2), and decentralised accountability (KI4.1) will be holistically addressed.
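The step from predicted demand to fleet size can be sketched in a few lines of Python. The demand figures, vehicle capacity, and district names below are illustrative assumptions, not values from the project:

```python
# Illustrative sketch: turn a predicted commuter demand forecast into a
# per-time-slot vehicle count, pooling riders that share origin/destination.

from math import ceil

def vehicles_needed(demand_forecast, capacity=8):
    """demand_forecast: {(time_slot, origin, destination): predicted_riders}.
    Returns the number of vehicles required per time slot."""
    per_slot = {}
    for (slot, origin, dest), riders in demand_forecast.items():
        per_slot[slot] = per_slot.get(slot, 0) + ceil(riders / capacity)
    return per_slot

forecast = {
    ("07:00", "district-north", "industrial-park"): 21,
    ("07:00", "district-south", "industrial-park"): 7,
    ("17:00", "industrial-park", "district-north"): 25,
}
print(vehicles_needed(forecast))  # {'07:00': 4, '17:00': 4}
```

In the demonstration the forecast itself would come from the IoT/AI demand prediction; the sketch only shows how fleet size follows from it.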
Demonstration #3.3 Strengthened Situational Awareness for automated vehicles aligned with Cities Mission. A situational awareness (SA) system for automated vehicles will be implemented, including sensors (i.e., 2D cameras) as well as perception algorithms (i.e., object detection modules for pedestrians, signals, and vehicles) (KI3.3) and decision-making (KI2.3) for autonomous navigation algorithms on RISC-V architectures, using chips to be designed under NexTArc (KI2.2: chips optimised for SA and autonomous navigation). Additionally, the vehicle status will be correlated with the smart city in order to generate value-added applications in the field of Smart Cities, such as mobility monitoring and planning, traffic management, possible new secure services, and optimal fleet management with the assessment of the risk factors experienced by drivers due to fatigue, distractions, and dangerous behaviours (KI2.1 & KI2.2: external perception for out-of-vehicle SA). A camera fusion module will be designed to perform data fusion at the bus level, thus producing a real-time map of bus occupation (KI2.7: data augmentation). The goal is to provide information about real-time bus occupation and online management of bus areas and passengers' anomalous occupation patterns. Based on the proposed architecture, the demonstrator will focus on privacy concerns: no images will be transferred from the cameras to the data-fusion processing unit, following the paradigms of “always-on” AI (KI2.3), in which model fine-tuning and arbitration can be adjusted on the fly exploiting a continuous learning paradigm (KI2.7). Resilience assessment of safety concerns will be tackled by using KI4.2. This demonstration also aims to provide actionable and georeferenced information about the flow of tourists, commuters, and commercial freight.
GDPR-compliant data acquisition using electro-optical sensors is performed at hotspots of the cities such as intersections, large squares, and the main entrance and exit ports of the city. AI technology, exploited on low-power edge devices, will analyse the visual data to extract all transportation objects, including pedestrians, cyclists, and different types of vehicles, and measure their speed, heading, inter-distances, queue lengths, etc. To realise this, an efficient design methodology is needed to extend the capability of deployed AI models on the fly by exploiting a library of learnable modules, which is addressed by KI2.7. The city-wide real-time data is integrated with the digital twin system and visualised in a separate layer, which is addressed by KI3.2. The data is available to inform tourists about crowding and to adjust traffic control accordingly, as well as for personnel planning in shops, police forces, and garbage collection.
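The privacy stance described here, where only anonymous statistics leave the edge device and raw frames never do, can be illustrated with a small sketch. The detection record format is an assumption for the example; in the demonstrator the records would come from the on-device perception model:

```python
# Illustrative sketch of GDPR-friendly edge aggregation: per-class counts and
# mean speeds are the only data transmitted to the digital twin layer;
# no image data ever leaves the device.

def aggregate_detections(detections):
    """detections: list of {"cls": ..., "speed_kmh": ...} records produced
    by the local model. Returns anonymised per-class statistics."""
    stats = {}
    for d in detections:
        s = stats.setdefault(d["cls"], {"count": 0, "speed_sum": 0.0})
        s["count"] += 1
        s["speed_sum"] += d["speed_kmh"]
    return {
        cls: {"count": s["count"], "mean_speed_kmh": s["speed_sum"] / s["count"]}
        for cls, s in stats.items()
    }

frame = [{"cls": "pedestrian", "speed_kmh": 4.0},
         {"cls": "pedestrian", "speed_kmh": 6.0},
         {"cls": "cyclist", "speed_kmh": 18.0}]
print(aggregate_detections(frame))
```

Extending the summary with headings, inter-distances, or queue lengths follows the same pattern: aggregate on-device, transmit only the aggregate.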
Demonstration #3.4 Bus air quality management for healthier passenger transportation. Bus air quality management will be implemented by integrating environmental air-quality IoT intelligent nodes and an automotive-grade edge server specifically conceived to support edge AI and distributed intelligence (KI3.3: Multi-Interface Gateway with Embedded Intelligence, KI2.5: Trustworthy Embedded AI, and KI4.2: AI immunity). The goal is to enable the bus to autonomously predict the air quality status based on the data collected by the smart sensors, correlate this information with the path usually travelled, and proactively take measures.
Demonstration #3.5 Privacy-preserving Video-enhanced people monitoring for safer transport. Bus video passenger monitoring and pedestrian flows in urban spaces will be implemented by exploiting camera systems in which people's locations and attributes are analysed by deploying privacy-preserving deep learning solutions and by maximising local (edge) processing on the camera side. To enable efficient and modular edge computing, a modular library of components is proposed, enabling easy integration when extending the functionality of deployed models (KI2.7: library of configurable parameter-efficient modules with online continual learning capabilities for people detection). Such a library grants access to lightweight, task-specific modules that can be combined without requiring extensive computational resources or labelled data. Unlike the conventional approach of fine-tuning monolithic models, this methodology enables the adaptation of a neural network to downstream tasks while maintaining computational efficiency by assembling these modules rather than initiating a new training process from scratch. This demonstrator will incorporate KI2.3 for safer cruising strengthened with human-in-the-loop decision-making, KI2.4 for integration with smart mobility practices, KI3.3 for better connectivity of the systems and high-throughput data transfer, and KI4.2 for safety-aware AI resilience assessment.
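The module-library idea, composing lightweight task modules instead of fine-tuning a monolithic model, can be sketched in plain Python. In this toy version the "modules" are simple functions over a feature dictionary; in the real system they would be small parameter-efficient neural adapters on top of a frozen backbone. All names and heuristics are illustrative:

```python
# Illustrative sketch of KI2.7-style composition: a registry of lightweight
# task modules that can be assembled into an edge model on the fly, without
# retraining from scratch.

MODULE_LIBRARY = {}

def register(name):
    """Add a task module to the shared library under `name`."""
    def deco(fn):
        MODULE_LIBRARY[name] = fn
        return fn
    return deco

@register("person_head")
def person_head(features):
    # Toy "detector": fires when the person-like feature is strong.
    return {"person": features.get("blob_score", 0.0) > 0.5}

@register("count_head")
def count_head(features):
    return {"count": features.get("num_blobs", 0)}

def compose(module_names):
    """Assemble a task-specific model from library modules; no retraining,
    just combination of already-trained pieces."""
    heads = [MODULE_LIBRARY[n] for n in module_names]
    def model(features):
        out = {}
        for head in heads:
            out.update(head(features))
        return out
    return model

edge_model = compose(["person_head", "count_head"])
print(edge_model({"blob_score": 0.8, "num_blobs": 3}))  # {'person': True, 'count': 3}
```

Extending a deployed model then amounts to registering one new module and recomposing, which is the efficiency argument made above.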