Photoshop for text

· 2 minute read

When I think about editing images, a vast array of options comes to mind: contrast, saturation, sharpen, blur, airbrush, clone stamp, etc. Even basic image editors offer dozens of useful image manipulation tools.

When I think about editing text, a much narrower definition comes to mind: cut, copy, paste, find, replace, spell check — nothing that modifies the totality of the writing. This is changing.

In the near future, transforming text will become as commonplace as filtering images. A new set of tools is emerging, like Photoshop for text.

Up until now, text editors have been focused on input. The next evolution of text editors will make it easy to alter, summarize and lengthen text. You’ll be able to do this for entire documents, not just individual sentences or paragraphs. The filters will be instantaneous and as good as if you wrote the text yourself. You will also be able to do this with local files, on your device, without relying on remote servers.

Today there are useful tools that build on spell-checkers to help you improve clarity, grammar, and tone — but these are rudimentary compared to the new capabilities being developed. Text filters will let you paraphrase text, so that you can switch easily between styles of prose: literary, technical, journalistic, legal, and more. You will be able to change an entire story chapter from first-person to third-person narration, or transform narrative descriptions into dialogue.
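
To make this concrete, here is a minimal sketch of what such a text filter could look like with today's open tooling. It assumes the Hugging Face transformers library and an instruction-tuned checkpoint (google/flan-t5-base is only one illustrative choice); the passage and prompt are placeholders, and output quality will vary with the model.

```python
# A minimal sketch of a "text filter": asking an instruction-tuned model
# to change a passage from first-person to third-person narration.
# Assumes the transformers library and the google/flan-t5-base checkpoint,
# chosen purely for illustration; any instruction-following model could stand in.
from transformers import pipeline

rewrite = pipeline("text2text-generation", model="google/flan-t5-base")

passage = "I walked down to the harbor and watched the boats come in."
prompt = f"Rewrite the following passage in the third person: {passage}"

result = rewrite(prompt, max_length=64)
print(result[0]["generated_text"])
```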

When Photoshop was created in the 1980s, it made image manipulation easy and reversible. Initially, many of Photoshop’s capabilities were adaptations of analog effects. For example, “dodge” and “burn” are old darkroom techniques used to alter photographs. There are countless skeuomorphic names throughout digital image editing tools that refer to analog processes.

In some ways it is surprising that filtering text is so technically challenging. Text seems like it would be easier to manipulate than images, but language has far more rules than imagery does. A reader expects writing to follow proper spelling and grammar, a consistent tone, and a logical sequence of sentences. Until now, meeting those expectations required building complex rule-based algorithms. Now we have AI models that can teach themselves to create readable text in any language.

These new tools will be able not only to transform text, but also to summarize it accurately, and even to expand it with more granular detail, in surprising and creative ways.

In “A camera for ideas”, I coined the term synthography to describe synthetic images created with generative models. Similarly, increasing amounts of text will be synthscribed, as in described, transcribed, inscribed — synthetically.

These capabilities are all possible today, but will take time to refine. To make the experience as seamless as image manipulation, language models need to be local to the device so that they can be private, offline, and future-proof. I’m excited to see more efforts pushing in this direction.
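
As one illustration of that direction, small quantized models can already run entirely on-device with a library like llama-cpp-python. This is a sketch under assumptions: the model file path is a placeholder for whatever instruction-tuned GGUF model you have downloaded, and the prompt is an arbitrary example.

```python
# A sketch of fully local, offline text transformation with llama-cpp-python.
# Assumes a quantized GGUF model has already been downloaded; the path below
# is a placeholder, not a recommendation of any particular model.
from llama_cpp import Llama

llm = Llama(model_path="./models/example-instruct.gguf", n_ctx=2048)

prompt = (
    "Paraphrase the following sentence in a formal, technical register:\n"
    "The app crashes whenever I hit the save button.\n"
)

output = llm(prompt, max_tokens=128, temperature=0.7)
print(output["choices"][0]["text"])
```

Nothing leaves the device, so the workflow stays private and keeps working without a connection.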

While some of these capabilities sound a bit scary at first, they will eventually become as mundane as “desaturate”, “Gaussian blur” or any other image filter, and unlock new creative potential.