Hello,
We have finished this step: generating 2D images with a VLM in visionOS.
As you can see, the result is based on another image dataset.
Here is an example result:
![momojo_2-1720733252713.png](https://experienceleaguecommunities.adobe.com/t5/image/serverpage/image-id/76844i0CE2644D5111A37E/image-size/medium?v=v2&px=400)
However, we need many cute 3D character images without backgrounds, like these:
![momojo_0-1720733184647.jpeg](https://experienceleaguecommunities.adobe.com/t5/image/serverpage/image-id/76842iFFBB8EC3EFD9E962/image-size/medium?v=v2&px=400)
![momojo_1-1720733195590.jpeg](https://experienceleaguecommunities.adobe.com/t5/image/serverpage/image-id/76843i7F18E99A5F63CEFD/image-size/medium?v=v2&px=400)
I would like to use Adobe Stock images such as
https://stock.adobe.com/kr/images/3d-monster-cartoon-character-fun-toy/637252498
as a dataset to train a diffusion model.
Please let me know what I should do.
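For context, here is roughly how we imagine preparing such stock images before fine-tuning. This is only a sketch under assumptions we made ourselves (the 512×512 target size, compositing transparent characters onto white, scaling pixels to [-1, 1]); it is not based on any Adobe tooling:

```python
# Hypothetical preprocessing sketch for diffusion fine-tuning.
# The target size and normalization range are our own assumptions,
# chosen to match common Stable Diffusion 1.x training setups.
from PIL import Image
import numpy as np

TARGET_SIZE = 512  # assumed training resolution


def preprocess(img: Image.Image) -> np.ndarray:
    """Composite transparent characters onto white, resize, scale to [-1, 1]."""
    if img.mode == "RGBA":
        # Flatten transparency onto a white background so the
        # "no background" characters train on a clean canvas.
        background = Image.new("RGBA", img.size, (255, 255, 255, 255))
        img = Image.alpha_composite(background, img)
    img = img.convert("RGB").resize((TARGET_SIZE, TARGET_SIZE), Image.LANCZOS)
    # Map uint8 [0, 255] pixel values to float [-1, 1].
    return np.asarray(img, dtype=np.float32) / 127.5 - 1.0
```

Is a pipeline like this, fed with licensed Adobe Stock assets, a reasonable starting point, or is there a recommended Adobe workflow for this?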
Plus, why is there no Adobe AI conference?