Benchmarks Show Google’s Banana Pro Outperforming OpenAI’s New GPT Image 1.5

OpenAI has released GPT Image 1.5, a notable update to its visual generation capabilities, amid an increasingly competitive landscape dominated by Google’s Banana Pro. While the release marks a progression from previous iterations, early performance benchmarks reviewed by toolmesh.ai suggest the model struggles to reclaim the technical lead held by its primary competitor, particularly in text rendering and photorealism.
Interface Updates and Processing Speed
The update introduces a redesigned interface within ChatGPT, featuring a distinct background palette and preset style shortcuts such as "Sugar Cookie" and "Plush Toy." The system now includes dedicated workflows for professional applications, including product photography and headshot generation.
While the integration of style presets aims to streamline the user experience, the interaction design for uploading reference images involves multiple pop-up windows, creating a fragmented workflow. However, technical performance has improved; generation latency has been reduced, with render times now averaging between 40 seconds and one minute.
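To put the quoted render times in context, the sketch below shows one way such latency figures can be measured against OpenAI’s Images API. This is an illustrative probe, not the methodology behind the figures above; in particular, the model identifier "gpt-image-1.5" is an assumption, as the SDK’s published examples use names like "gpt-image-1".

```python
# Minimal latency probe for an image-generation endpoint (illustrative sketch).
# Assumes the official openai Python SDK and an OPENAI_API_KEY in the environment.
# The model name "gpt-image-1.5" is a placeholder; substitute the identifier
# your account actually exposes.
import time

from openai import OpenAI

client = OpenAI()

def timed_generation(prompt: str, model: str = "gpt-image-1.5") -> float:
    """Issue one generation request and return wall-clock latency in seconds."""
    start = time.perf_counter()
    client.images.generate(model=model, prompt=prompt, size="1024x1024")
    return time.perf_counter() - start

if __name__ == "__main__":
    samples = [timed_generation("a product photo of a ceramic mug") for _ in range(3)]
    print(f"mean render time: {sum(samples) / len(samples):.1f}s")
```

Averaging several runs matters here, since single-request timings for large image models can vary widely with server load.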
Text Rendering and Information Accuracy
Text adherence remains a critical differentiator in multimodal AI systems. In comparative tests involving complex formatting, such as generating a standard calendar for February 2026, Google’s Banana Pro executed the numerical sequence and grid alignment precisely. In contrast, GPT Image 1.5 failed to terminate the sequence correctly, hallucinating dates beyond the 28th; February 2026 falls in a non-leap year and contains exactly 28 days.
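What makes this test a clean benchmark is that its ground truth is deterministic. A minimal sketch using Python’s standard library reproduces the exact grid a correct generation should contain:

```python
# Ground-truth calendar for the February 2026 text-rendering test,
# built entirely from Python's standard-library calendar module.
import calendar

# monthrange returns (weekday of day 1, number of days); 2026 is not a leap year.
first_weekday, day_count = calendar.monthrange(2026, 2)
assert day_count == 28  # any generated date past the 28th is a hallucination

# Render the reference grid a correct image should reproduce.
# Starting the week on Sunday is a presentational choice, not part of the test.
print(calendar.TextCalendar(firstweekday=calendar.SUNDAY).formatmonth(2026, 2))
```

Scoring a model’s output against this reference leaves no room for subjective judgment about what the grid should look like.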
Similar disparities appeared in multilingual tasks. When tasked with rendering Chinese poetry in a calligraphy style, OpenAI’s model produced illegible characters. While Banana Pro managed coherent text generation, it struggled with layout specifics, such as the placement of Pinyin annotations. In tests simulating user interface design, such as an Instagram feed, Google’s model demonstrated superior structural understanding, whereas OpenAI’s iteration failed to replicate standard UI elements accurately.
Photorealism and Texture Quality
When tasked with generating photorealistic portraits and atmospheric scenes, the two models diverged stylistically. GPT Image 1.5 exhibited a tendency toward high saturation and contrast, producing a glossy, artificial texture often associated with synthetic media.
Conversely, Banana Pro demonstrated superior handling of natural lighting, skin textures, and environmental details. In complex lighting scenarios, such as a backstage dressing room or a dimly lit bar, Google’s model produced outputs that more closely resembled optical photography, avoiding the "over-processed" aesthetic observed in the GPT Image 1.5 results.
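Claims about saturation and contrast are also straightforward to check numerically. The following sketch, which assumes Pillow and NumPy are installed and uses hypothetical filenames, computes two simple statistics that would distinguish a glossy, over-processed output from a more photographic one:

```python
# Quick saturation/contrast statistics for comparing model outputs.
# Assumes Pillow and NumPy; the filenames below are hypothetical placeholders.
import numpy as np
from PIL import Image

def image_stats(path: str) -> tuple[float, float]:
    """Return (mean saturation, RMS contrast), both on a 0-1 scale."""
    img = Image.open(path).convert("RGB")
    hsv = np.asarray(img.convert("HSV"), dtype=np.float64) / 255.0
    saturation = hsv[..., 1].mean()          # mean of the S channel
    luma = np.asarray(img.convert("L"), dtype=np.float64) / 255.0
    contrast = luma.std()                    # RMS contrast of luminance
    return float(saturation), float(contrast)

print(image_stats("gpt_image_15_portrait.png"))  # hypothetical output files
print(image_stats("banana_pro_portrait.png"))
```

Higher values on both statistics would be consistent with the saturated, high-contrast look described above, though such metrics complement rather than replace visual inspection.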
Editing Precision and Consistency
The models were stress-tested on semantic editing capabilities, such as replacing subjects within a scene or altering weather conditions while retaining original elements. Banana Pro maintained accurate perspective and lighting integration when swapping subjects, preserving physically plausible depth and foreground-background relationships. OpenAI’s iteration struggled with these spatial relationships, often flattening image depth.
However, GPT Image 1.5 outperformed its rival in specific compositional prompts. In tests requiring a top-down camera angle of a group reflection in a mirror, OpenAI’s model successfully achieved the composition, whereas Google’s model failed to adhere to the specific camera positioning instructions.
World Knowledge and Logical Reasoning
Assessments of "world knowledge" yielded split results. In pop-culture rendering tasks, such as ranking characters from the anime One Piece, GPT Image 1.5 delivered stylistically superior visuals but suffered from factual hallucinations regarding specific character identities. Banana Pro maintained factual accuracy in these tests.
In logic-heavy prompts, such as rendering a specific time on an analog clock alongside anatomical anomalies like an unusual finger count, GPT Image 1.5 visualized the time correctly but failed the finger count, whereas Banana Pro rendered neither element accurately. Overall, while OpenAI has made incremental improvements, the rapid development cycle of Google’s Banana series appears to maintain a technical edge in consistency and fidelity.