# Welcome!
We give some context:

- Ray Tracing (RT) is a powerful technique for simulating electromagnetic (EM) fields in complex environments.
- It is computationally expensive, especially for large scenes.
- Ray Launching (RL) is a faster alternative, but it either sacrifices accuracy or requires a lot of rays to be launched.
We present the main issue behind (exhaustive) RT: we try many rays, but only a few are valid.
Let's review the Ray Tracing Pipeline. A scene is made of...
To trace all rays from TX to RX, we usually first generate a set of path candidates.
That set of path candidates is usually made of all possible paths for a given order N.
For example, for first order, we try reflection on each possible object.
For second order, we try reflection on each possible object, followed by reflection on each other object.
For third order, we try reflection on each possible object, followed by reflection on each other object, followed by reflection on each other object...
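To make this concrete, here is a minimal sketch (in Python, assuming objects are simply indexed from 0 to M-1) of this exhaustive enumeration; the "each other object" rule means we only exclude two consecutive reflections on the same object:

```python
from itertools import product

def path_candidates(num_objects: int, order: int):
    """Yield every reflection sequence of the given order, excluding
    consecutive reflections on the same object."""
    for candidate in product(range(num_objects), repeat=order):
        if all(a != b for a, b in zip(candidate, candidate[1:])):
            yield candidate

# Example: all 2nd-order candidates in a scene with 3 objects.
print(list(path_candidates(3, 2)))
# [(0, 1), (0, 2), (1, 0), (1, 2), (2, 0), (2, 1)]
```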
For each path candidate, we trace the corresponding ray.
We then check whether the ray is valid, i.e., whether it reaches the RX without being obstructed by any object in the scene (other than at its reflection points).
We then select the valid rays, and compute the EM fields at the RX.
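Putting the steps together, a minimal sketch of the whole pipeline could look like this; it reuses `path_candidates` from above, while `trace`, `is_valid`, and `compute_fields` are hypothetical placeholders for the scene-specific geometry and EM routines, which are not shown here:

```python
def ray_tracing(num_objects, order, trace, is_valid, compute_fields):
    """Exhaustive RT: enumerate candidates, trace, filter, compute fields.

    `trace(candidate)` (hypothetical) turns an object sequence into a
    TX -> RX ray path (or None if no such path exists), `is_valid(ray)`
    checks that the path is unobstructed, and `compute_fields(rays)`
    evaluates the EM fields at the RX from the valid rays.
    """
    candidates = path_candidates(num_objects, order)
    rays = (trace(c) for c in candidates)
    valid_rays = [r for r in rays if r is not None and is_valid(r)]
    return compute_fields(valid_rays)
```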
This is the Ray Tracing pipeline, but it is computationally expensive, especially for large scenes. Going back to the path candidates, we can see that the number of paths grows exponentially with the order N, and that most of them are invalid. This is the challenge we want to address with our model.
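To put a number on that growth: with $M$ objects and the "each other object" rule described above, the number of order-$N$ candidates is

$$M\,(M-1)^{N-1},$$

so a scene with $M = 100$ objects already yields 100 1st-order, 9,900 2nd-order, and 980,100 3rd-order candidates.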
Our idea is to use a generative model to select the most promising path candidates, i.e. the ones that are more likely to lead to valid rays. This way, we can reduce the number of rays we need to trace, and thus speed up the RT pipeline.
Our model learns to generate path candidates with the highest probability of leading to valid rays.
Now, our path candidate step only generates a limited number of path candidates: the most promising ones according to the generative model; see the sampling sketch below.
1st order.
2nd order.
3rd order.
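As an illustration of this step, here is one way such a model could be sampled; `prob_fn` is a hypothetical stand-in for the learned network, which predicts a distribution over scene objects given the reflections chosen so far:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def sample_candidates(prob_fn, order, num_samples):
    """Draw a small set of path candidates from the generative model.

    `prob_fn(prefix)` (hypothetical) returns a probability vector over
    the scene objects, conditioned on the reflections chosen so far,
    so each candidate is built one reflection at a time.
    """
    candidates = set()
    for _ in range(num_samples):
        prefix = []
        for _ in range(order):
            p = prob_fn(prefix)
            prefix.append(int(rng.choice(len(p), p=p)))
        candidates.add(tuple(prefix))
    return candidates
```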
Our model is trained with reinforcement learning: it learns to generate path candidates by maximizing a reward function. The reward is based on the validity of the rays traced from the candidates, so the model is encouraged to generate candidates that lead to valid rays.
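The exact reward and training algorithm are not detailed here; as a minimal sketch, assuming a binary reward (1 if the traced ray is valid, 0 otherwise) and a score-function (REINFORCE) estimator, a training loss could look like:

```python
import torch

def reinforce_loss(log_probs: torch.Tensor, rewards: torch.Tensor) -> torch.Tensor:
    """Score-function estimator for maximizing the expected reward.

    log_probs: log-probability of each sampled candidate under the model.
    rewards:   1.0 for candidates that led to a valid ray, 0.0 otherwise
               (an assumed reward design, not confirmed by the source).
    """
    baseline = rewards.mean()  # simple baseline for variance reduction
    return -((rewards - baseline) * log_probs).mean()
```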
We define two metrics: accuracy, the fraction of generated path candidates that lead to valid rays, and hit rate, the fraction of all valid paths that the model recovers.
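Under the definitions above (inferred from the results that follow), the two metrics reduce to simple set ratios; `generated` and `valid` are hypothetical sets of candidates:

```python
def accuracy(generated: set, valid: set) -> float:
    """Fraction of generated candidates that are actually valid paths."""
    return len(generated & valid) / len(generated)

def hit_rate(generated: set, valid: set) -> float:
    """Fraction of all valid paths that the model managed to generate."""
    return len(generated & valid) / len(valid)
```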
Let's look at training results for 1st- and 2nd-order reflection in a street canyon. For 1st order, the accuracy is low while the hit rate is good: only part of the generated candidates are valid, but the model recovers a diverse set of valid rays, which is what we want. For 2nd order, the hit rate only reaches 30%, which is not yet sufficient to replace exhaustive RT with our model. In both cases, the accuracy is far better than that of a random model, which would generate valid rays with a probability of 3% for 1st-order and 0.03% for 2nd-order reflection. Ongoing research has already shown that the model can be improved, reaching a hit rate above 80% in both cases.
How does it translate to actual radio propagation?
Let's wrap up
- Exhaustive RT enumerates path candidates whose number grows exponentially with the order N, and most of them are invalid.
- We train a generative model with reinforcement learning to propose only the most promising path candidates.
- On 1st- and 2nd-order reflection in a street canyon, its accuracy is far better than random candidate selection.
- The hit rate is not yet sufficient to fully replace exhaustive RT, but ongoing work already exceeds 80% for both orders.
Future work
Future work point 1
Future work point 2
Future work point 3
Future work point 4
Links to code and tutorial.