Real-World Computer Vision Applications Communications Experts Should Understand
Director of Marketing at Appen, driving responsible, inclusive AI and training data conversations with global companies.
Computer vision (CV) may sound like something straight out of the future if you aren’t familiar with AI. But computer vision has been around for some time, and companies are using it for many different applications. This means communicators should understand it, tell its stories and familiarize themselves with how companies are using it in their field.
What Is Computer Vision?
Computer vision is a subfield of AI that enables computers to pull meaningful information from images and videos. Once a CV application has interpreted an image or video, it can act on that interpretation based on how its AI models have been trained — usually using annotated, structured data.
While computers can’t always interpret images as well as humans can, computer vision technology has come a long way in recent years, and innovations continue to arrive regularly, driven by increases in computing power, newly available data and new machine-learning techniques.
Communicators are leveraging these innovations to run better campaigns, amplify their messages further and stand out in a noisy, crowded marketplace. Rather than repeating the generic advice to adopt AI, I want to walk through some of the most compelling real-world computer vision use cases in marketing and communications to help communicators understand the landscape. What’s great about interacting with any computer vision model is that you can actively help make it better by providing feedback on your interactions.
Computer Vision And Communications
The easiest way to understand computer vision is to dive into examples of how companies are using it in relation to the marketing and communications function. In all likelihood, you’ve used CV without even knowing it. Here are just a few examples.
Searching For Images
We’ve all been there — someone asks you about your pet and you start digging through your photo app on your phone to find the best pictures of your cat (which are sorted by the time you took them).
Instead of endlessly scrolling through your images to search for the right one, you can now search for the word “cat” in many apps and the app will return all the images of cats. (Don’t worry: That would turn out to be a lot of photos for me too.)
This capability is powered by computer vision. Next time you’re looking for a photo for a blog post or press release, you might be using algorithms to get the right results faster.
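Under the hood, a search like this boils down to two steps: a model predicts labels for each image, and the app filters your library on those labels. Here is a minimal Python sketch of that idea; `classify()` is a hypothetical stand-in for a real image-classification model, stubbed here with fixed predictions for illustration:

```python
# Minimal sketch of label-based photo search.
# classify() stands in for a real image-classification model;
# the stubbed predictions below are invented for illustration.

def classify(filename):
    """Return the labels a CV model might predict for an image."""
    stub_predictions = {
        "IMG_001.jpg": ["cat", "sofa"],
        "IMG_002.jpg": ["beach", "sunset"],
        "IMG_003.jpg": ["cat", "garden"],
    }
    return stub_predictions.get(filename, [])

def search_photos(filenames, query):
    """Return every photo whose predicted labels include the query term."""
    return [f for f in filenames if query in classify(f)]

photos = ["IMG_001.jpg", "IMG_002.jpg", "IMG_003.jpg"]
print(search_photos(photos, "cat"))  # → ['IMG_001.jpg', 'IMG_003.jpg']
```

In a real photo app, the labels would come from a trained model rather than a lookup table, but the search step itself is this simple filter.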
Social Media Filters
Whether you think they’re terrible or wonderful, you can also thank computer vision for filters on social media platforms such as Instagram, Snapchat and TikTok. These filters typically use computer vision to track where your face is in space and then add a filter on top.
The filters run the gamut from silly — like adding puppy ears or fairy wings — to skin smoothing and makeup-adding features. Whether you love ’em or hate ’em, computer vision is behind your ability to try on these filters.
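Conceptually, a filter like this has two stages: a face-detection model returns a bounding box for your face, and the app anchors the overlay art relative to that box, frame by frame. A rough Python sketch, with `detect_face()` as a hypothetical stub for a real detector:

```python
# Sketch of how a face filter positions an overlay (e.g., puppy ears).
# detect_face() stands in for a real face-detection model; it returns
# a bounding box as (x, y, width, height) in pixel coordinates.

def detect_face(frame):
    """Stub detector: pretend the model found a face at this box."""
    return (120, 80, 200, 200)  # invented coordinates for illustration

def place_ears_overlay(frame):
    """Anchor an ears overlay just above the detected face box."""
    x, y, w, h = detect_face(frame)
    overlay_width = w          # scale the art to the face width
    overlay_x = x              # left-aligned with the face box
    overlay_y = y - h // 4     # sit just above the forehead
    return (overlay_x, overlay_y, overlay_width)

print(place_ears_overlay(frame=None))  # → (120, 30, 200)
```

Because detection runs on every video frame, the overlay appears to track your face as you move.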
Evaluating Messaging Through Attention Detection
One of the biggest challenges we have as communicators is getting people’s attention in a very crowded and noisy environment. Today, we are bombarded by messaging through phones, laptops, TVs, billboards, radio and online streaming services, so we work hard to ensure our messaging stands out. A little bit of testing can help us save money on ad budgets, and an engaging ad can be the difference between a successful campaign and a flop.
Several companies have developed ad effectiveness testing platforms to address this challenge; they use computer vision to measure audience engagement. Companies like Realeyes leverage attention measurement to help teams evaluate their ad’s impact before it even hits the designated medium.
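In simplified terms, attention measurement comes down to a gaze or attention model scoring each frame of a viewing session and the platform aggregating those scores into an engagement metric. A toy Python sketch of that aggregation step (the per-frame predictions here are invented; real platforms such as Realeyes use far richer signals):

```python
# Toy sketch of attention measurement: given per-frame predictions
# from a (hypothetical) gaze model saying whether a viewer is looking
# at the ad, compute the share of the ad's runtime that held attention.

def attention_score(gaze_on_ad):
    """Fraction of frames in which the viewer's gaze was on the ad."""
    if not gaze_on_ad:
        return 0.0
    return sum(gaze_on_ad) / len(gaze_on_ad)

# One viewer's 10-frame session: True = looking at the ad.
frames = [True, True, False, True, True, True, False, False, True, True]
print(attention_score(frames))  # → 0.7
```

Aggregated across a test panel, scores like this let teams compare ad cuts before committing media budget.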
AI-Powered Image Editing
Not all use cases have to be complex. They can be as simple as using computer vision to detect objects in an image and changing or separating them from the background. Image editing tools could come in handy when marketers want to iterate on a key visual for social media and their designer is gone for the day. For example, a company conveniently called Removal.AI leverages CV technology to function as a background remover.
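At its core, background removal is image segmentation: a model labels each pixel as foreground or background, and the tool keeps only the foreground. A toy Python sketch, with `segment()` as a hypothetical stub for a real segmentation model and a tiny list-of-lists standing in for an image:

```python
# Toy sketch of background removal via segmentation.
# segment() stands in for a real segmentation model; it labels each
# pixel as foreground (1) or background (0). The "image" is a tiny
# list-of-lists of pixel values, invented for illustration.

def segment(image):
    """Stub segmentation mask, same shape as the image."""
    return [
        [0, 1, 1, 0],
        [0, 1, 1, 0],
        [0, 0, 1, 0],
    ]

def remove_background(image):
    """Keep foreground pixels; replace background with None (transparent)."""
    mask = segment(image)
    return [
        [px if keep else None for px, keep in zip(img_row, mask_row)]
        for img_row, mask_row in zip(image, mask)
    ]

image = [
    ["r", "g", "b", "r"],
    ["g", "b", "r", "g"],
    ["b", "r", "g", "b"],
]
print(remove_background(image))
```

A production tool does the same thing at full resolution, with a trained model producing the mask and an alpha channel standing in for `None`.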
Contextual Advertising And Brand Safety
Computer vision technology can evaluate millions of existing images and videos and decide where to place relevant ads in a fraction of the time a manual process would take. Some companies are also using computer vision to help ensure brand safety and prevent clients’ brands from appearing next to undesirable content.
For example, GumGum is a company that conducts content analysis to help advertisers and publishers better target ads in context. Companies like Zefr are working to help brands steer clear of videos that might contain content that they don’t want to be associated with. (Full disclosure: GumGum and Zefr are Appen customers.) And StackAdapt is a programmatic advertising platform that identifies “the highest-value opportunities to reach customers” before a campaign launches.
There are many more computer vision tools out there that are targeted toward communicators.
Helping Build The Future Of Computer Vision
It’s taken a long time to get to where we are today with computer vision technology. Each breakthrough gets us just a little bit closer to a place where computers can “see” the world around them just like humans do.
The applications we’re using computer vision for today range from convenience and beauty to efficiency and entertainment — but some researchers are also using it to detect life-threatening diseases like cancer. As this technology gets better and better, I think we’re going to find more ways to use it to make our lives safer and better.
And remember, as communicators, we can also help improve the machine learning models that power innovations like these by providing feedback for each of our interactions when requested. Often, companies will use this feedback to add more data to improve their models and deliver better results for us in future interactions. If we all contribute, we could get there faster and with more diverse views.