Walk through any modern city and you’ll notice something odd. Cars keep getting safer and smarter, yet crossing the road doesn’t always feel safer. Crumple zones, airbags and robust crash structures have protected people inside vehicles remarkably well. Outside the vehicle, though, pedestrians and cyclists still carry a frightening amount of risk.
That gap is exactly where simulation and AI are starting to make a difference. Instead of relying only on physical crash tests and limited real-world trials, engineers are building virtual worlds full of cars, cyclists and people on foot, then stress-testing them with thousands of situations that would be impossible – or illegal – to try on public roads.
This shift is reshaping how pedestrian safety is designed, tested and improved. Here’s how it’s happening and why it matters.
From crash labs to virtual streets
For a long time, carmakers used simulation mainly to fine-tune how a vehicle behaves in a crash. Digital models helped them decide where to reinforce the structure, how to shape the hood, or how airbags should deploy. But when it came to official safety ratings, regulators and consumer bodies insisted on physical tests only.
That mindset is starting to change. Programs like Euro NCAP are gradually accepting virtual crash results in specific areas, with protocols that define how these simulations must be built and validated. Rather than replacing physical tests overnight, virtual testing is becoming a trusted extension of them.
Why is this such a big deal for people outside the car? Because once you trust virtual tools for occupant safety, you can use the same toolbox to explore how a vehicle interacts with pedestrians and cyclists – without putting anyone in harm’s way.
Why pedestrians are still at higher risk
Vehicle occupants have benefited from decades of focused engineering effort: seatbelts, airbags, side-impact structures, active safety systems and more. Pedestrians haven’t had that level of attention or investment. In some countries, serious injuries and deaths among cyclists now outnumber those of car occupants, even as overall road safety has improved.
There are a few reasons for this imbalance:
- People walking or cycling are unprotected and highly variable in size, behavior and position.
- Many dangerous situations involve occlusions – someone stepping out from behind a parked vehicle or a truck turning across a cyclist they can’t see.
- Traditional crash tests capture only a handful of “typical” configurations, not the messy chaos of real streets.
Simulation is one of the few ways to explore this chaos in a systematic, repeatable way. But it has to be done carefully.
Bringing discipline to simulation: the V4SAFETY framework
One challenge with simulation is credibility. Different teams might use different assumptions, traffic scenarios or human behavior models, yet still present their results as definitive. That makes it hard for regulators, city planners or even other engineers to compare studies and trust the conclusions.
To tackle this, European partners created a framework known as V4SAFETY, built around the ISO 21934 standard.
The idea is simple but powerful: before running a simulation campaign you must be crystal clear about:
- What you’re trying to answer: are you evaluating a new emergency braking system, a redesigned intersection, or a regulation change?
- Which scenarios you’re studying: urban crosswalks at night? Rural roads with cyclists? Left-turn conflicts with pedestrians?
- Which models you’re using: vehicle dynamics, sensor models, human behavior models – all selected and documented.
- How you’ll compare with reality: accident databases, naturalistic driving studies, or real-world test logs.
The framework also offers templates, guidance on choosing appropriate models, and even open-source behavior models, such as a model of driver responses to forward-collision warnings. The goal isn’t to force everyone into a single simulation tool, but to make studies transparent, comparable and less prone to exaggerated claims.
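The checklist above amounts to a machine-readable record of a study's key decisions. The sketch below shows one way such a record could look in code; the class and field names are invented for illustration and are not an official V4SAFETY or ISO 21934 template.

```python
from dataclasses import dataclass, asdict

@dataclass
class SimulationStudy:
    """Illustrative record of a simulation campaign's key decisions.
    Field names are hypothetical, not an official V4SAFETY template."""
    research_question: str      # what the study is trying to answer
    scenarios: list             # which traffic situations are covered
    models: dict                # vehicle, sensor and human models used
    validation_sources: list    # how results are compared with reality

study = SimulationStudy(
    research_question="Does AEB with cyclist detection reduce right-turn conflicts?",
    scenarios=["urban right turn, cyclist alongside", "night-time crosswalk"],
    models={"vehicle": "multibody dynamics", "sensor": "idealized camera",
            "pedestrian": "constant-speed crossing"},
    validation_sources=["national accident database", "naturalistic driving logs"],
)

# A documented study can be serialized and shared alongside its results,
# so other teams can see exactly what was assumed.
print(asdict(study)["research_question"])
```

Writing these choices down before running the campaign is what makes two studies comparable afterwards.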
For anyone working on safety – from OEMs to research labs – this kind of discipline is what turns simulation from “nice demo” into real decision support.
Making pedestrians in simulation behave like real humans
Even the best mathematical models struggle to capture how people actually move and decide. A scripted “pedestrian avatar” in a simulator might obediently cross at a constant speed on a perfect path. Real humans are nothing like that.
People hesitate on the curb, speed up when they think a car is going too fast, suddenly turn back to grab a dropped phone, or cross in groups. Many current simulation tools barely capture this range of behavior.
Researchers in the AI4CCAM project approached this problem in a clever way: they connected a virtual reality (VR) headset directly to the open-source CARLA driving simulator. A real person wears the headset and becomes the “pedestrian” inside the virtual city. The ego vehicle – equipped with ADAS or automated driving software – reacts to this human-controlled pedestrian in real time.
Suddenly the simulator has:
- Jaywalking and late decisions
- Changes of pace and direction
- Natural body language and hesitation
This pedestrian-in-the-loop setup generates exactly the kind of messy, unpredictable behavior that exposes weaknesses in an ADAS algorithm. On top of that, all of this interaction is recorded and can be replayed or reused as training data for AI models that predict how pedestrians move.
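The record-and-replay idea is simple in outline: each frame of the VR pedestrian's motion is logged as a timestamped state, and the log is later sliced into (history, future) pairs for training a trajectory-prediction model. The sketch below uses an invented 2D state format, not the actual CARLA or AI4CCAM data formats.

```python
# Log each frame of a (here, synthetic) VR pedestrian as (t, x, y),
# then slice the log into (history, future) training pairs.
# The state format is illustrative, not CARLA's or AI4CCAM's.

def record_trajectory(positions, dt=0.1):
    """Attach timestamps to a sequence of (x, y) positions."""
    return [(i * dt, x, y) for i, (x, y) in enumerate(positions)]

def to_training_pairs(log, history=3, horizon=2):
    """Slide a window over the log: the past `history` states are the
    input, the next `horizon` states are the prediction target."""
    pairs = []
    for i in range(len(log) - history - horizon + 1):
        past = log[i:i + history]
        future = log[i + history:i + history + horizon]
        pairs.append((past, future))
    return pairs

# A hesitating crossing: the pedestrian pauses, then hurries across.
walked = [(0.0, 0.0), (0.3, 0.0), (0.3, 0.0), (0.3, 0.0),
          (1.0, 0.1), (1.8, 0.2), (2.6, 0.3)]
log = record_trajectory(walked)
pairs = to_training_pairs(log)
print(len(pairs))  # number of (history, future) samples
```

The same log can be replayed against a new software version, turning one VR session into a reusable regression test.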
Future work could add VR treadmills and multiple pedestrians wearing headsets, creating entire crowds interacting with vehicles in a virtual town square.
AI that predicts how a car will hurt – or protect – a pedestrian
Active safety systems aim to prevent a crash altogether. But engineers still have to design the “last line of defense” – the shape of the hood, bumper and surrounding structures that determine how badly a pedestrian is injured if an impact occurs.
Running high-fidelity crash simulations for every design tweak is slow and expensive, especially when regulations require many different pedestrian impact points and conditions.
To speed things up, companies such as General Motors and Neural Concept are testing AI models that learn from past crash simulations. These models take the 3D geometry of the vehicle front and predict key metrics like hood deformation, energy absorption or the head injury criterion (HIC) at many impact points.
In practice, this means:
- Engineers can explore many more design ideas early in the process.
- Only the most promising candidates need full, detailed simulations.
- Subtle interactions between components – like hinges, wiper motors or reinforcements – can still influence the prediction, because the AI “sees” the full geometry, not just a handful of parameters.
Over time, as more vehicles are simulated and tested, the model gets better. It becomes a kind of safety memory that feeds every new program.
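The core idea – reuse stored simulation results to estimate new design points cheaply – can be shown with a deliberately tiny surrogate. This toy uses nearest-neighbor averaging over impact coordinates with synthetic numbers; the real systems mentioned above learn from full 3D geometry with deep networks, which this sketch does not attempt.

```python
# Toy surrogate: predict a head-injury metric at a new impact point by
# distance-weighted averaging of the k nearest previously simulated
# points. All numbers are synthetic, for illustration only.
import math

# (impact_x, impact_y) on the hood -> HIC value from a past simulation.
past_runs = [
    ((0.2, 0.1), 650.0), ((0.5, 0.1), 820.0),
    ((0.8, 0.1), 900.0), ((0.5, 0.4), 700.0),
]

def predict_hic(point, runs, k=2):
    """Distance-weighted average over the k nearest simulated points."""
    ranked = sorted(runs, key=lambda r: math.dist(point, r[0]))[:k]
    weights = [1.0 / (math.dist(point, p) + 1e-9) for p, _ in ranked]
    return sum(w * hic for w, (_, hic) in zip(weights, ranked)) / sum(weights)

# Screen a candidate impact point in microseconds instead of hours,
# then send only promising designs to full crash simulation.
estimate = predict_hic((0.45, 0.15), past_runs)
print(round(estimate))
```

The payoff is the workflow, not the model: cheap screening first, expensive high-fidelity simulation only for the shortlist.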
Seeing around corners: V2X and occluded pedestrians
Some of the scariest situations involve people you simply can’t see. A child steps out from behind a parked van. A truck turns right while a cyclist is alongside, hidden in the blind spot.
Under the V4SAFETY work, researchers studied how vehicle-to-everything (V2X) communication could help in these cases. One example: a camera or smart sensor at a dangerous intersection detecting cyclists and sending warnings directly to approaching trucks.
Even if connectivity, battery charge or data coverage mean these systems only work part of the time, they can still significantly reduce risk where visibility is worst. Simulation allows engineers to estimate that impact before anyone drills holes in real sidewalks or installs roadside equipment.
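A first-order version of that estimate is plain arithmetic: a warning system that is only available part of the time still removes a proportional share of the occluded-conflict risk. The numbers below are made up for illustration, not results from the V4SAFETY studies.

```python
# First-order estimate of how a roadside V2X warning changes risk at
# one intersection. All numbers are illustrative placeholders.

baseline_conflicts_per_year = 40.0   # occluded cyclist conflicts observed
availability = 0.7                   # fraction of time the system is up
warning_effectiveness = 0.5          # share of conflicts defused by a warning

avoided = baseline_conflicts_per_year * availability * warning_effectiveness
remaining = baseline_conflicts_per_year - avoided
print(avoided, remaining)  # 14.0 26.0
```

Even this crude model makes the trade-off visible: halving downtime matters as much as improving the warning itself, which is exactly the kind of question simulation can answer before any hardware is installed.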
Why real-world AV testing isn’t enough
For automated vehicles, the stakes are even higher. Letting a prototype loose in dense pedestrian areas just to “see what happens” is obviously unacceptable. Edge cases – like a person crossing through thick fog or darting out between cars at night – might never be encountered during limited test drives, yet they’re exactly the situations that expose weaknesses.
High-fidelity simulation platforms now combine tools like Nvidia’s sensor-accurate rendering with scenario engines and data platforms from companies such as Foretellix. Together, they generate synchronized camera, lidar and radar streams for thousands of virtual interactions between vehicles and pedestrians.
Teams can:
- Recreate rare real-world incidents using logged data
- Automatically generate variations (different speeds, lighting, clothing, traffic density)
- Measure how well the full driving stack behaves against clear safety KPIs
- Feed difficult scenarios back into training and regression testing
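The "automatically generate variations" step is, at its core, a cross-product over scenario parameters. A minimal sketch follows; the parameter names and values are invented, and real scenario engines such as Foretellix's use dedicated scenario-description languages rather than plain dictionaries.

```python
# Expand one logged incident into a grid of variations by sweeping
# scenario parameters. Parameter names are invented for illustration.
from itertools import product

base_scenario = {"event": "pedestrian crosses between parked cars"}

sweeps = {
    "ego_speed_kmh": [30, 40, 50],
    "lighting": ["day", "dusk", "night"],
    "pedestrian_speed_ms": [1.0, 1.5, 2.0],
}

variations = [
    {**base_scenario, **dict(zip(sweeps, combo))}
    for combo in product(*sweeps.values())
]

print(len(variations))  # 3 * 3 * 3 = 27 test cases from one incident
```

Each variation then runs against the full driving stack and is scored against the safety KPIs, so a single real-world incident seeds dozens of regression tests.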
The result is a closed feedback loop where pedestrian safety is continuously probed, stressed and improved long before the vehicle reaches public roads.
The bigger picture: what needs to happen next
Simulation and AI will not magically solve pedestrian safety on their own. But they are changing the game in a few important ways:
- More realistic humans in the loop: VR-based pedestrians and advanced behavior models help systems cope with real human unpredictability.
- Faster design iteration: AI-based predictors cut down the number of full simulations needed to optimize vehicle fronts and safety systems.
- Better decisions from cleaner studies: frameworks like V4SAFETY make simulation results more trustworthy and comparable.
- Safer testing of edge cases: sensor-accurate virtual worlds let automated vehicles face the kind of dangerous situations that would be unacceptable to stage in real life.
For engineers, policymakers and city planners, the opportunity is clear: treat simulation and AI not as shiny extras, but as core tools for protecting the most vulnerable people on the road.
And for everyone who walks or cycles through busy streets each day, the hope is simple. The next time a car approaches a crosswalk, the algorithms quietly keeping you safe will already have survived thousands of virtual near-misses – so you don’t have to.
