Google’s generative AI model, Gemini, has now gained camera and screen recognition capabilities. However, the feature is exclusive to paid subscribers.
The US-based tech giant has made a significant move with its Gemini AI model, allowing it to perceive the world visually, much as humans do. But how exactly does this work?
According to a statement from Google spokesperson Alex Joseph, Gemini now supports camera and screen interaction through the Live feature. This means that while using Gemini Live, users can activate their phone’s camera, show their surroundings to the AI, and receive assistance on virtually any topic.
Available Only to Google One AI Premium Subscribers

Google clarified that Gemini’s ability to process visual input through the camera and screen is currently restricted to Google One AI Premium subscribers.
At the moment, the feature appears to be rolling out gradually. Posts on Reddit indicate that while some users can already access it, others have yet to receive the update. This suggests that many users will need to wait a little longer before unlocking Gemini’s visual capabilities.
A New Dimension for Generative AI

By integrating real-world visual perception, Gemini takes a significant step forward in how generative AI interacts with users. This update allows for context-aware support, enabling users to show the AI their environment and ask questions related to what it sees—such as identifying objects, reading signs, or understanding screen content.