Yes, of course. Once you have the winner, irrespective of how long the algorithm takes to determine it (which depends on the traffic volume and the activity), you should start seeing the lift bounds and confidence values.
Yes, auto-allocation helps determine the winner faster than a manual A/B test. However, if the test is stopped before reaching a conclusion, i.e. before a winner is declared, one simply has to make a best guess based on the metric values, e.g. the conversion rate. It is neither correct nor very useful to claim a result while the experiences still look largely similar to the underlying algorithm, since the test could go either way if run for more time. Hope that helps.
For any activity, lift bounds start showing up for a given experience once it has reached statistical significance with respect to control, in terms of both the confidence value and the confidence interval.
What differentiates auto-allocation-based activities is that there is no notion of a control experience: the statistical significance of each experience is tied to that of the activity itself, which is attained once the algorithm has determined the activity's true winner. Once that happens, the lift bounds, along with the other statistics (confidence and confidence interval), show up in the report.
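To make the idea of lift bounds concrete, here is a rough, illustrative sketch of how bounds on relative lift between two experiences can be computed under a simple normal approximation. Note this is only an assumption-laden illustration: Adobe Target's actual Auto-Allocate algorithm is a multi-armed bandit with its own internal statistics, which are not reproduced here, and the function name and parameters below are hypothetical.

```python
import math

def lift_bounds(conv_a, n_a, conv_b, n_b, z=1.96):
    """Approximate 95% bounds on the relative lift of B over A.

    Illustrative only -- uses a plain normal approximation for the
    difference of two conversion rates, not Target's actual method.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Standard error of the difference in conversion rates
    se = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    diff_lo = (p_b - p_a) - z * se
    diff_hi = (p_b - p_a) + z * se
    # Express the bounds as lift relative to A's conversion rate
    return diff_lo / p_a, diff_hi / p_a

# Hypothetical numbers: 100/1000 conversions vs 130/1000 conversions
lo, hi = lift_bounds(conv_a=100, n_a=1000, conv_b=130, n_b=1000)
print(f"lift between {lo:.1%} and {hi:.1%}")
```

If the lower bound is above zero, the data suggests B genuinely outperforms A at that confidence level; if the interval straddles zero, the experiences are still too similar to call, which is exactly the situation described above where claiming a result is premature.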
Please refer to the Auto-Allocate documentation for more information.