YouTube has published a new overview of how its content recommendation system works. Recommendations are one of the key drivers of video reach and views on the platform, so the overview can help YouTube marketers better understand what prompts the optimal response.
YouTube actually published a similar overview earlier this year, as part of its ongoing transparency effort, with this new explainer adding a little more historical insight into how its systems have evolved and how it works to improve its processes.
As explained by YouTube:
“Our recommendation system is built on the simple principle of helping people find the videos they want to watch and that will give them value.”
‘Value’ is, of course, a pretty vague term in social media measurement, but according to YouTube, the idea is to show people more of what they want to see, based not only on their own behavior but on that of similar users as well.
“You can find recommendations at work in two main places: your homepage and the ‘Up Next’ panel. Your homepage is what you see when you first open YouTube – it contains a mix of personalized recommendations, subscriptions, and the latest news and information. The Up Next panel appears when you’re watching a video and suggests additional content based on what you’re currently watching, along with other videos that we think you may be interested in.”
The ‘Up Next’ panel has been one of the more scrutinized elements of the platform over the past year, with some users saying that these recommendations can lead them down conspiracy-fueled rabbit holes, even radicalizing them based on the content they encounter.
So how can this happen?
Here are some important notes on exactly how the YouTube recommendation process works.
YouTube’s recommendations are fundamentally based on four key elements:
- Clicks – The videos you click on give YouTube a direct indication of your interest in that content. But a click alone doesn’t determine your experience. For example, you might click through to a video while searching for something, then not find what you wanted in that particular clip, so the click itself is not a strong indicator of what you want. That’s why YouTube also measures ‘watch time’ as an additional qualifier.
- Watch time – As it sounds, watch time measures how long you actually watch each video you click on, which helps YouTube tailor recommendations more specifically to your interests: “If a tennis fan watched 20 minutes of Wimbledon highlight clips, but only a few seconds of a match analysis video, we can safely assume that watching the highlights was more valuable to them.”
- Shares, likes, and dislikes – YouTube also measures your share and like activity, another direct response signal in the app: “Our system uses this information to predict the likelihood that you will share or like further videos. If you dislike a video, that’s a signal that you probably did not enjoy it.”
- Survey answers – Lastly, in addition to these explicit response indicators, YouTube also runs regular viewer surveys to find out whether users are having a good in-app experience. For example, after you watch a 20-minute video, YouTube may ask whether you enjoyed it, prompting you for a star rating to better guide its recommendation systems.
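As a rough illustration, the four signal types above could be combined into a single relevance score per viewer-video pair. This is a hypothetical sketch for intuition only – the `VideoSignals` structure and all weights are invented here, and bear no resemblance to YouTube's actual model:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class VideoSignals:
    clicked: bool           # did the viewer click through to the video?
    watch_fraction: float   # share of the video actually watched (0.0-1.0)
    liked: bool
    shared: bool
    disliked: bool
    survey_stars: Optional[int] = None  # optional 1-5 post-watch survey rating

def relevance_score(s: VideoSignals) -> float:
    """Toy weighted combination of click, watch time, reactions, and surveys."""
    score = 0.0
    if s.clicked:
        score += 1.0
    # Watch time dominates: a click followed by a few seconds of
    # watching counts for little (the "click isn't enough" point above).
    score += 4.0 * s.watch_fraction
    score += 2.0 * s.liked + 2.0 * s.shared - 3.0 * s.disliked
    if s.survey_stars is not None:
        score += s.survey_stars - 3  # center around a neutral 3-star rating
    return score

# A long watch with a like outweighs a mere click-and-bounce.
deep = relevance_score(VideoSignals(True, 0.9, True, False, False))
bounce = relevance_score(VideoSignals(True, 0.05, False, False, False))
```

The key design point the article describes is the asymmetry: the click contributes a small fixed amount, while watch time scales continuously, so a clicked-but-abandoned video scores far lower than one watched to the end.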
You could probably have guessed that all of these elements are taken into account, so there is not necessarily much new insight there. More interesting is that YouTube also tries to help you find content that you don't even know about yet, based on what other people with similar viewing profiles watch.
“So, if you like tennis videos and our system notices that others who like the same tennis videos also enjoy jazz videos, it may recommend jazz videos to you, even if you’ve never watched a single one.”
This is likely how people end up down those conspiracy theory avenues – you watch one video on a topic that interests you, and YouTube then serves up a stream of related videos that other viewers of that clip have watched. Falling into the wrong viewer profile can expose you to all manner of dubious material – though YouTube also notes that it is working to address such recommendations and limit exposure to what it describes as ‘low-quality content’.
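The "people with similar viewing profiles" logic in the quote above is a form of collaborative filtering. A minimal sketch of the idea, using co-watch counts – the watch histories and the counting scheme are invented here for illustration, not a description of YouTube's production system:

```python
from collections import Counter
from itertools import combinations

# Hypothetical watch histories: user -> set of videos watched.
histories = {
    "u1": {"tennis_1", "tennis_2", "jazz_1"},
    "u2": {"tennis_1", "tennis_2", "jazz_1", "jazz_2"},
    "u3": {"tennis_1", "cooking_1"},
}

# Count how often each ordered pair of videos is watched by the same user.
co_watch = Counter()
for videos in histories.values():
    for a, b in combinations(sorted(videos), 2):
        co_watch[(a, b)] += 1
        co_watch[(b, a)] += 1

def recommend(user: str, top_n: int = 3) -> list:
    """Suggest unseen videos that co-occur most with the user's history."""
    seen = histories[user]
    scores = Counter()
    for watched in seen:
        for (a, b), n in co_watch.items():
            if a == watched and b not in seen:
                scores[b] += n
    return [video for video, _ in scores.most_common(top_n)]

# u3 has only watched tennis_1, yet jazz videos surface because
# other tennis viewers also watched them -- the "tennis fans who
# like jazz" effect, and equally the mechanism behind rabbit holes.
```

This also makes the rabbit-hole risk concrete: if the users who co-watch your videos happen to watch fringe content, that content gets pulled into your recommendations through exactly the same mechanism.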
So, what qualifies as ‘low quality’ in this context?
“We’ve been working to keep low-quality content from being widely recommended since 2011, when we built classifiers to identify videos that were racy or violent and to prevent them from being recommended. Then, in 2015, we noticed that sensational tabloid content was appearing on homepages and took steps to demote it. A year later, we started predicting the likelihood of a video showing minors in risky situations and removing those videos from recommendations. And in 2017, to ensure that our recommendation system was fair to marginalized communities, we began evaluating the machine learning behind it for fairness across protected groups – such as the LGBTQ+ community.”
In addition to this, YouTube also bans content that includes false health claims (such as COVID conspiracy clips), while taking further steps to address political misinformation. Of course, some of this type of material still gets through, but YouTube is working to improve its systems to ensure that such material is not recommended through its discovery tools.
An important consideration here relates to ‘authoritative’ and ‘borderline’ content.
To limit the reach of borderline content – material that doesn't necessarily violate the platform's rules, but does contain potentially harmful elements – YouTube uses human evaluators to rate the quality of the information in each channel or video.
“These evaluators come from all over the world and are trained using a set of detailed, publicly available rating guidelines. We also rely on certified experts, such as medical doctors, when the content involves health information.”
To assess whether content is ‘authoritative’, YouTube says its evaluators answer a few key questions:
- Does the content live up to its promise or does it achieve its purpose?
- What kind of expertise is needed to achieve the video's goal?
- What is the reputation of the speaker in the video and the channel it is on?
- What is the main topic of the video (e.g. news, sports, history, science, etc.)?
- Is the content primarily intended as satire?
YouTube’s evaluators then rate a channel/creator based on a range of qualifiers, including online reviews, expert recommendations, news articles, and Wikipedia entries (you can see the full list of potential qualifiers here).
All in all, the system is designed to use both explicit and implicit signals to surface more of what each person wants to see, while filtering out the worst content to limit potential harm. The potential for harm is a factor in this calculation, curbing reach, and again, YouTube says it keeps updating its recommendation tools to ensure that higher-quality content, at least by these qualifiers, gets more exposure in the app.
YouTube also shared an overview of how its recommendation algorithms have evolved over time.
From a marketing and performance standpoint, the most important consideration among these various measures is audience response, and creating content that appeals to your target viewers.
You can gauge this in your YouTube analytics, and with users subscribing directly to your channel, you have several important indicators for evaluating your performance and ensuring you connect with viewer interests. Do that, and your content will also be surfaced to other people with similar audience characteristics, while maintaining a good reputation on the site and a strong overall web presence will also limit potential downgrades from YouTube's evaluators.
It's also worth checking your content against the ‘authoritative’ questions listed above as a quick measure of your alignment with YouTube's goals.
None of these elements will guarantee success, but failing to tick the right boxes will limit your potential. It's worth noting these key points, and considering them in every aspect of your marketing efforts.