One interesting approach is to label all keywords that trigger Featured Snippets or 'People also ask' boxes, then watch how their traffic changes over time as those features come and go. I wouldn't adjust tracked positions just because the layout changed, so that reporting stays comparable over time. It's also worth considering a 'true position' that counts the ads shown above the organic results, alongside the legacy tracking.
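The 'true position' idea above can be sketched very simply: shift the organic rank down by however many ads and SERP features render above the organic results. The function name and feature labels here are illustrative assumptions, not part of any tracking tool's API.

```python
def true_position(organic_position, features_above):
    """Hypothetical 'true position': the organic rank shifted down by the
    number of ads and SERP features rendered above the organic results."""
    return organic_position + len(features_above)

# Rank 1 with three ads and a featured snippet above it
print(true_position(1, ["ad", "ad", "ad", "featured_snippet"]))  # -> 5
```

Tracking both numbers side by side lets you see cases where the classic rank is stable but the effective position has slipped because more features were inserted above it.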
I also like Koray Tuğberk's idea of calculating position in pixels rather than ranks. If the first organic web result renders more than 450 pixels from the top of the page, I'd gladly spend my budget and time on another query group instead. Today there is a tool that calculates pixels from the top: nozzle.io.
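That 450-pixel rule of thumb can be sketched as a simple cutoff. The feature heights and header offset below are illustrative assumptions for the sketch; a tool like nozzle.io would report the actual measured pixel offsets.

```python
# Illustrative pixel heights for common SERP features (assumed, not measured)
FEATURE_HEIGHTS = {"ad": 120, "featured_snippet": 300, "people_also_ask": 230}

def pixels_from_top(features_above, header_px=150):
    """Estimated pixel offset of the first organic result, given the
    features rendered above it plus a fixed search-header height."""
    return header_px + sum(FEATURE_HEIGHTS[f] for f in features_above)

def worth_targeting(features_above, threshold_px=450):
    """Apply the 450px heuristic: skip query groups whose first organic
    result would render deeper than the threshold."""
    return pixels_from_top(features_above) <= threshold_px

print(worth_targeting(["ad", "ad"]))                # 150 + 240 = 390 -> True
print(worth_targeting(["ad", "featured_snippet"]))  # 150 + 420 = 570 -> False
```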
And here's the unique approach applied by Boyd Norwood: based on the overall SERP features present, they first allocate CTR to four top-level sections, visible in the top left of the screenshot (paid, no-clicks, organic column 1, and organic column 2), using recent research and clickstream data.
Then, inside each of those sections, we start with a base CTR curve, again research-driven and varying by device, branded vs. non-branded searches, and so on. Each pack then receives a CTR boost or reduction depending on its pixel coverage and the specific SERP features it contains. The results are normalized to give a CTR allocation per pack.
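The two upper layers described so far can be sketched as follows: split total CTR across the top-level sections, then distribute each section's share to its packs via a base CTR adjusted by a pixel-coverage modifier and normalized. All numbers here are illustrative assumptions, not Nozzle's actual research-driven curves.

```python
# Illustrative top-level CTR split across the four sections (assumed values)
SECTION_SHARES = {"paid": 0.10, "no_clicks": 0.30,
                  "organic_col_1": 0.55, "organic_col_2": 0.05}

def allocate_packs(section_share, packs):
    """packs: list of (name, base_ctr, pixel_modifier).
    Boost or reduce each pack's base CTR by its modifier, then normalize
    so the section's share is fully allocated across its packs."""
    raw = {name: base * mod for name, base, mod in packs}
    total = sum(raw.values())
    return {name: section_share * w / total for name, w in raw.items()}

packs = [("featured_snippet", 0.30, 1.4),   # large pixel coverage -> boost
         ("organic_results", 0.60, 1.0),
         ("people_also_ask", 0.10, 0.8)]    # collapsed pack -> reduction
alloc = allocate_packs(SECTION_SHARES["organic_col_1"], packs)
print(alloc)  # pack-level CTRs that sum to the section's 0.55 share
```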
Then, inside each pack, those clicks are allocated to the individual clickable items. The base item-level CTR is driven by pixel coverage, with modifiers applied for specific features inside the item, similar to how the pack modifiers work. Those numbers are again normalized and run through an exponential decay curve that differs by pack layout. With all of the data available to us, I believe this is the most accurate CTR calculation ever made, and it flows through to all other metrics.
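The final, item-level layer can be sketched the same way: weight each clickable item by pixel coverage and a feature modifier, apply an exponential decay over item order, and normalize to the pack's CTR allocation. The decay rate and modifiers below are illustrative assumptions; in the approach described above they would vary by pack layout.

```python
import math

def allocate_items(pack_ctr, items, decay=0.5):
    """items: list of (name, pixel_coverage, feature_modifier), in SERP order.
    Weights combine pixel coverage, feature modifiers, and an exponential
    decay by position, then normalize to the pack's CTR allocation."""
    weights = [cov * mod * math.exp(-decay * i)
               for i, (_, cov, mod) in enumerate(items)]
    total = sum(weights)
    return {name: pack_ctr * w / total
            for (name, _, _), w in zip(items, weights)}

items = [("result_1", 180, 1.2),  # taller item with sitelinks -> boost
         ("result_2", 150, 1.0),
         ("result_3", 150, 1.0)]
alloc = allocate_items(0.30, items)
print(alloc)  # item-level CTRs that sum to the pack's 0.30 allocation
```

A steeper decay models tightly stacked layouts where the first item dominates, while a gentler one fits grid-style packs where items compete more evenly.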