Adobe’s integration of its Firefly generative AI technology into its flagship image editor, Photoshop, has the potential to be a game-changer in the industry. While Adobe may have entered the generative AI space slightly later than some of its competitors, it has been making significant strides recently. After introducing Firefly in beta and integrating it with Google Bard, Adobe has now brought the technology to Photoshop.
With the addition of Firefly’s generative AI capabilities, Photoshop users can now add, transform, or remove elements in images using simple text prompts. The highlight feature, Generative Fill, lets users select a portion of an image with the lasso or another selection tool and fill it with new imagery generated from a text prompt. The tool automatically matches the perspective, style, and lighting of the original image, even adding details like reflections and shadows.
Importantly, the newly generated content is added in non-destructive layers, allowing users to reverse edits without affecting the original image. Alongside Generative Fill, the Photoshop beta also includes around 30 new adjustment Presets, filters that can be applied to images to achieve a particular look and feel. There is also a new Remove Tool powered by Adobe Sensei AI, which enables quick elimination of unwanted objects and saves valuable time compared to manual retouching. A new Contextual Task Bar further improves usability by recommending relevant next steps in various workflows, while Enhanced Gradients provide new on-canvas controls.
These new features represent a logical and expected progression in Adobe’s expansion of its generative AI tools. Google Bard’s integration with Firefly foreshadowed the arrival of the same technology in Adobe’s own flagship products, and Adobe has been steadily incorporating AI-driven tools into Photoshop, such as neural filters that can transform facial features.
Rufus Deuchler, Director of Worldwide Creative Cloud Evangelism at Adobe, emphasized the significance of Firefly’s integration into Photoshop, stating that it will transform the way creatives work. He praised Firefly as the only AI service that produces high-quality professional content suitable for commercial use and integration into creative workflows. Generative Fill, combined with Firefly’s capabilities, offers Photoshop users an unprecedented level of control and opens up new possibilities for image extension, object manipulation, and content creation.
Photoshop’s Generative Fill is expected to streamline the process of photo-bashing and collaging by automatically matching the context of an image and reducing the tedious work of adding details like shadows and reflections. It could also save significant time in finding specific images to include in compositions, reducing the need to comb through stock photo libraries.
Beyond time-saving and convenience, Generative Fill also encourages experimentation by letting users quickly test unconventional ideas. The instant feedback the technology provides allows for rapid exploration and makes users more inclined to try out creative concepts.
Adobe has taken measures to address ethical concerns surrounding generative AI by training Firefly on Adobe Stock images rather than using a broader range of internet-sourced images. Moreover, Generative Fill supports Adobe’s Content Credentials “nutrition labels,” which provide information about whether an image has been created or edited using AI, adding transparency to the creative process.
While Adobe plans to include Firefly Generative Fill as a standard feature in Photoshop later this year, it is currently available in the Photoshop beta. Users can access the beta version by opening the Creative Cloud desktop app, navigating to “Beta apps,” and installing the Photoshop (Beta) app. By embracing the beta features, users can experience the transformative power of generative AI editing firsthand.