Based on my limited contact with AI during the aughts' semantic web/description logic symbolic heyday (we were exploring ways in which multiple communicating knowledge bases might resolve conflicting information): symbolic reasoning with uncertainty is too hard, maybe feasible in a very distant future. When ML and symbolics are successfully put together, I expect the symbolics to focus on what they do best: ignore uncertainty and change, and leave all of that to the ML part. For example, if you do the "obvious thing" and run symbolic reasoning on top of classifications supplied by ML (which may be a naive approach that doesn't work out at all, I have no idea), you would feed corrective training updates back into the classification layer instead of softening the concepts when the outcome of the reasoning is not satisfactory.
Imaginary toy example: if your rules state that cars always stop at stop signs, but the observed reality is that this hardly ever happens, this first iteration of ML-fed symbolics would not adapt by adjusting the rules to "cars carefully approach stop signs, but don't actually stop"; it would eventually adapt by classifying the red octagonal shape as a yield sign, keeping the rules for stop signs as-is (but never seeing any).
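To make the control flow I have in mind slightly more concrete, here's a minimal sketch of that feedback loop. Everything in it is hypothetical (the `Classifier` stand-in, the rule table, the relabelling step); it's not a real ML pipeline, just the shape of the idea: the symbolic rule stays fixed, and a mismatch between what the rule predicts and what is observed is pushed back to the classification layer as a corrective example.

```python
# Hedged toy sketch of the feedback loop described above.
# All names here are invented for illustration.

# Fixed symbolic rules: sign label -> expected car behaviour.
# These are never softened, no matter what the world does.
RULES = {
    "stop_sign": "full_stop",
    "yield_sign": "slow_down",
}

class Classifier:
    """Stand-in for an ML classifier over sign images."""
    def __init__(self):
        # corrective (image, label) pairs fed back over time
        self.corrections = []

    def predict(self, image):
        # placeholder: a real model would run inference here
        return "stop_sign"

    def add_correction(self, image, new_label):
        # corrective training update: the rules stay untouched,
        # only the classifier's notion of what the shape "is" shifts
        self.corrections.append((image, new_label))


def reconcile(classifier, image, observed_behaviour):
    """Run the symbolic rule on the classification and, on mismatch,
    push the correction back into the classification layer."""
    label = classifier.predict(image)
    expected = RULES[label]          # the rule ignores uncertainty
    if expected != observed_behaviour:
        # Don't adjust the rule ("cars roughly stop at stop signs");
        # relabel the input so the rule keeps holding.
        for other_label, behaviour in RULES.items():
            if behaviour == observed_behaviour:
                classifier.add_correction(image, other_label)
                break


clf = Classifier()
reconcile(clf, image="red_octagon.png", observed_behaviour="slow_down")
print(clf.corrections)   # [('red_octagon.png', 'yield_sign')]
```

Run enough of these corrections through retraining and you end up exactly where the toy example lands: the red octagon drifts toward "yield sign", and the stop-sign rule survives intact by never firing.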