Recommender systems are AI-based algorithms that use data about users and items to recommend products to customers. We can say that recommender systems are an application of machine learning because the algorithms “learn from the past,” i.e., use historical data to predict future preferences.

Today, we’re exposed to vast amounts of information. The internet is overflowing with data on virtually any topic. Recommender systems are like filters that analyze the data and offer the users (you) only relevant information. Since what’s relevant to you may not interest someone else, these systems use unique criteria to provide the best results to everyone.

In this article, we’ll dig deep into recommender systems and discuss their types, applications, and challenges.

Types of Recommender Systems

Learning more about the types of recommender systems will help you understand their purpose.

Content-Based Filtering

With content-based filtering, it’s all about the features of a particular item. Algorithms pick up on specific characteristics to recommend a similar item to the user (you). Of course, the starting point is your previous actions and/or feedback.

Sounds too abstract, doesn’t it? Let’s explain it through a real-life example: movies. Suppose you’ve subscribed to a streaming platform and watched The Notebook (a romance/drama starring Ryan Gosling and Rachel McAdams). Algorithms will sniff around to investigate this movie’s properties:

  • Genre
  • Actors
  • Reviews
  • Title

Then, algorithms will suggest what to watch next and display movies with similar features. For example, you may find A Walk to Remember on your list (because it belongs to the same genre and is based on a book by the same author). But you may also see La La Land on the list (although it’s not the same genre and isn’t based on a book, it stars Ryan Gosling).
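
To make this concrete, here’s a minimal content-based sketch in Python. The movies, the four binary features, and the similarity measure are all simplified for illustration; a real system would use much richer features (plot descriptions, full cast lists, review text).

```python
import numpy as np

# Hypothetical feature vectors: [romance, drama, musical, stars Ryan Gosling]
movies = {
    "The Notebook":       np.array([1, 1, 0, 1]),
    "A Walk to Remember": np.array([1, 1, 0, 0]),
    "La La Land":         np.array([1, 0, 1, 1]),
}

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors (1.0 = same direction)."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

watched = "The Notebook"
# Rank the rest of the catalog by similarity to what the user already watched
ranked = sorted(
    (title for title in movies if title != watched),
    key=lambda title: cosine_similarity(movies[watched], movies[title]),
    reverse=True,
)
print(ranked)  # the most similar titles come first
```

Both candidates share features with The Notebook, so both get recommended; which one ranks higher depends on how the features are weighted.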

Some of the advantages of this type are:

  • It only needs data from a specific user, not a whole group.
  • It’s ideal for those who have interests that don’t fall into the mainstream category.

A potential drawback is:

  • It recommends only similar items, so users can’t really expand their interests.

Collaborative Filtering

In this case, users’ preferences and past behaviors “collaborate” with one another, and algorithms use these similarities to recommend items. We have two types of collaborative filtering: user-user and item-item.

User-User Collaborative Filtering

The main idea behind this type of recommender system is that people with similar interests and past purchases are likely to make similar selections in the future. Unlike the previous type, the focus here isn’t on just one user but on a whole group.

Collaborative filtering is popular in e-commerce, with a famous example being Amazon. The platform analyzes customers’ profiles and reviews and uses that data to recommend products.
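
Here’s a minimal user-user sketch in Python, using a tiny made-up rating matrix. A production system would work with millions of users and far more robust similarity and aggregation methods.

```python
import numpy as np

# Hypothetical rating matrix: rows = users, columns = items, 0 = not rated
ratings = np.array([
    [5, 4, 0, 1],   # user 0 (the one we recommend for)
    [4, 5, 4, 2],   # user 1: tastes look a lot like user 0's
    [1, 0, 5, 4],   # user 2: very different tastes
])

def similarity(u, v):
    """Cosine similarity between two users' rating vectors."""
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9)

target = 0
others = [i for i in range(len(ratings)) if i != target]
# Find the most similar other user (the "neighbor")
neighbor = max(others, key=lambda i: similarity(ratings[target], ratings[i]))

# Recommend items the neighbor rated highly that the target hasn't rated yet
recs = [item for item in range(ratings.shape[1])
        if ratings[target, item] == 0 and ratings[neighbor, item] >= 4]
print(neighbor, recs)  # user 1 is the neighbor; item 2 gets recommended
```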

The main advantages of user-user collaborative filtering are:

  • It allows users to explore new interests and stay in the loop with trends.
  • It doesn’t need information about the specific characteristics of an item.

The biggest disadvantage is:

  • It can be overwhelmed by data volume and offer poor results.

Item-Item Collaborative Filtering

If you’ve ever wondered how Amazon knows you want a mint green protective case for the phone you just ordered, the answer is item-item collaborative filtering. Amazon invented this type of filtering back in 1998. With it, the e-commerce platform can make quick product suggestions and let users purchase them with ease. Here, the focus isn’t on similarities between users but between products.
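
A minimal item-item sketch in Python looks like this. The products and the purchase matrix are invented for illustration; the key difference from user-user filtering is that we compare the columns (products) instead of the rows (users).

```python
import numpy as np

# Hypothetical purchase matrix: rows = customers, columns = products (1 = bought)
# Products: 0 = phone, 1 = mint green case, 2 = blender
purchases = np.array([
    [1, 1, 0],
    [1, 1, 0],
    [1, 0, 1],
    [0, 0, 1],
])

def item_similarity(i, j):
    """Cosine similarity between two product columns: high when the same
    customers tend to buy both products."""
    a, b = purchases[:, i], purchases[:, j]
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

just_bought = 0  # the customer just ordered the phone
candidates = [j for j in range(purchases.shape[1]) if j != just_bought]
best = max(candidates, key=lambda j: item_similarity(just_bought, j))
print(best)  # 1: the case is the product most often bought alongside the phone
```

Because item similarities can be computed ahead of time, the suggestion at checkout is just a quick lookup.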

Some of the advantages of item-item collaborative filtering are:

  • It doesn’t require information about the user.
  • It encourages users to purchase more products.

The main drawback is:

  • It can suffer from a decrease in performance when there’s a vast amount of data.

Hybrid Recommender Systems

As we’ve seen, both collaborative and content-based filtering have their advantages and drawbacks. To get the best of both worlds, experts designed hybrid recommender systems, which mitigate the weaknesses of collaborative and content-based filtering and offer better performance.

With hybrid recommender systems, algorithms take into account different factors:

  • Users’ preferences
  • Users’ past purchases
  • Users’ product ratings
  • Similarities between items
  • Current trends

A classic example of a hybrid recommender system is Netflix. There, you’ll see recommended content based on the TV shows and movies you’ve already watched. You can also discover content that users with similar interests enjoy and see what’s trending at the moment.
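
One simple way to combine the signals is a weighted blend of each recommender’s score. The scores and weights below are made up; real systems typically learn the weighting (or use more sophisticated combinations) from data.

```python
# A minimal weighted hybrid: blend content-based, collaborative, and trend
# scores (each assumed to be normalized to [0, 1]) into a single ranking.
def hybrid_score(content, collab, trend,
                 w_content=0.5, w_collab=0.4, w_trend=0.1):
    """Weighted combination of the individual recommenders' scores."""
    return w_content * content + w_collab * collab + w_trend * trend

# Candidate shows with hypothetical (content, collaborative, trend) scores
candidates = {
    "Show A": (0.9, 0.2, 0.1),   # very similar to what you've watched
    "Show B": (0.4, 0.8, 0.9),   # loved by similar users and trending now
}
ranking = sorted(candidates, key=lambda s: hybrid_score(*candidates[s]),
                 reverse=True)
print(ranking)  # ['Show B', 'Show A'] with these particular weights
```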

The biggest strong points of this system are:

  • It offers precise and personalized recommendations.
  • It mitigates cold-start problems (poor performance due to a lack of information).

The main drawback is:

  • It’s highly complex.

Machine Learning Techniques in Recommender Systems

It’s fair to say that machine learning is the foundation of recommender systems. This subfield of artificial intelligence (AI) represents the process of computers generating knowledge from data. We understand the “machine” part, but what does “learning” imply? “Learning” means that machines improve their performance and enhance their capabilities as they process more data and become more “experienced.”

The four machine learning techniques recommender systems love are:

  • Supervised learning
  • Unsupervised learning
  • Reinforcement learning
  • Deep learning

Supervised Learning

In this case, algorithms feed off past data to predict the future. To do that, algorithms need to know what they’re looking for in the data and what the target is. Datasets in which the target label is known are called labeled datasets, and they teach algorithms how to classify data or make predictions.

Supervised learning has found its place in recommender systems because it helps understand patterns and offers valuable recommendations to users. It analyzes users’ past behavior to predict their future behavior. Plus, supervised learning can handle large amounts of data.
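
As a small illustration, here’s a supervised sketch using scikit-learn’s logistic regression. The features and labels are entirely made up: each row describes a (user, item) pair, and the label says whether that user ended up liking that item.

```python
from sklearn.linear_model import LogisticRegression

# Hypothetical labeled dataset: [user's avg rating in the item's genre,
# item's avg rating overall, item age in years]
X = [
    [4.5, 4.0, 1.0],
    [4.8, 4.6, 0.5],
    [2.0, 3.1, 6.0],
    [1.5, 2.2, 8.0],
]
y = [1, 1, 0, 0]  # target label: 1 = the user liked the item, 0 = they didn't

model = LogisticRegression().fit(X, y)

# Before recommending, predict whether a new user-item pair will be a hit
new_pair = [[4.2, 4.4, 2.0]]
print(model.predict(new_pair))        # predicted label
print(model.predict_proba(new_pair))  # probability of each label
```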

The most obvious drawback of supervised learning is that it requires human involvement, and training machines to make predictions is no walk in the park. There’s also the issue of result accuracy. Whether or not the results will be accurate largely depends on the input and target values.

Unsupervised Learning

With unsupervised learning, there’s no need to “train” machines on what to look for in datasets. Instead, the machines analyze the information to discover hidden patterns or similar features. In other words, you can sit back and relax while the algorithms do their magic. There’s no need to worry about inputs and target values, and that is one of the best things about unsupervised learning.

How does this machine learning technique fit into recommender systems? The main application is exploration. With unsupervised learning, you can discover trends and patterns you didn’t even know existed. It can discover surprising similarities and differences between users and their online behavior. Simply put, unsupervised learning can perfect your recommendation strategies and make them more precise and personal.
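
For instance, here’s a clustering sketch with k-means, which groups users by behavior without any labels. The behavioral features below are invented; a real system might cluster on viewing history, purchase patterns, or learned embeddings.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical user behavior: [hours watched per week, % romance, % sci-fi]
users = np.array([
    [10, 0.8, 0.1],
    [12, 0.9, 0.0],
    [3,  0.1, 0.8],
    [4,  0.0, 0.9],
])

# No labels needed: k-means groups users purely by similarity of behavior
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(users)
print(kmeans.labels_)  # e.g. [0 0 1 1]: two taste segments found in the raw data

# A new user is assigned to the nearest segment, and that segment's
# favorite items become recommendation candidates
print(kmeans.predict(np.array([[11.0, 0.7, 0.2]])))
```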

Reinforcement Learning

Reinforcement learning is another technique used in recommender systems. It functions like a reward-punishment system, where the machine has a goal that it needs to achieve through a series of steps. The machine will try a strategy, receive feedback, change the strategy as necessary, and try again until it reaches the goal and earns a reward.

The most basic example of reinforcement learning in recommender systems is movie recommendations. In this case, the “reward” would be the user giving a five-star rating to the recommended movie.
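
A common way to formalize this is a multi-armed bandit. Below is an epsilon-greedy sketch in which the “reward” is a simulated five-star rating; the movies and their hidden five-star probabilities are made up, and real systems would also incorporate user and context features.

```python
import random

movies = ["Movie A", "Movie B", "Movie C"]
true_five_star_prob = [0.2, 0.7, 0.4]  # hidden from the agent; only simulates users

counts = [0] * len(movies)    # how many times each movie was recommended
values = [0.0] * len(movies)  # running average reward per movie
epsilon = 0.1                 # fraction of the time the agent explores

random.seed(0)
for _ in range(1000):
    if random.random() < epsilon:
        choice = random.randrange(len(movies))  # explore: try something at random
    else:
        choice = values.index(max(values))      # exploit: best movie so far
    # Reward of 1 if the simulated user gives five stars, else 0
    reward = 1 if random.random() < true_five_star_prob[choice] else 0
    counts[choice] += 1
    values[choice] += (reward - values[choice]) / counts[choice]  # update average

print(movies[values.index(max(values))])  # almost certainly "Movie B"
```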

Deep Learning

Deep learning is one of the most advanced (and most fascinating) subcategories of AI. The main idea behind deep learning is building neural networks that mimic the structure and function of the human brain. Machines that feature this technology can learn new information and draw their own conclusions with little human assistance.

Thanks to this, deep learning offers fine-tuned suggestions to users, enhances their satisfaction, and ultimately leads to higher profits for companies that use it.
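
As a toy illustration of the idea, here’s a minimal neural recommendation sketch in PyTorch (an assumed architecture, not any particular company’s model): users and items each get a learned embedding vector, and their dot product predicts the rating.

```python
import torch
import torch.nn as nn

class MatrixFactorization(nn.Module):
    def __init__(self, n_users, n_items, dim=8):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)  # one learned vector per user
        self.item_emb = nn.Embedding(n_items, dim)  # one learned vector per item

    def forward(self, user_ids, item_ids):
        # Predicted rating = similarity between user and item embeddings
        return (self.user_emb(user_ids) * self.item_emb(item_ids)).sum(dim=1)

# Tiny synthetic training set: (user, item) pairs and the ratings they gave
users = torch.tensor([0, 0, 1, 2])
items = torch.tensor([0, 1, 1, 2])
ratings = torch.tensor([5.0, 3.0, 4.0, 1.0])

model = MatrixFactorization(n_users=3, n_items=3)
optimizer = torch.optim.Adam(model.parameters(), lr=0.05)
for _ in range(200):  # fit the embeddings by minimizing squared rating error
    optimizer.zero_grad()
    loss = ((model(users, items) - ratings) ** 2).mean()
    loss.backward()
    optimizer.step()

# Predict a rating for a pair the model never saw during training
print(model(torch.tensor([2]), torch.tensor([0])))
```

Deeper variants replace the dot product with multi-layer networks that can capture non-linear interactions between users and items.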

Challenges and Future Trends in Recommender Systems

Although we may not realize it, recommender systems are the driving force of online purchases and content streaming. Without them, we wouldn’t be able to discover amazing TV shows, movies, songs, and products that make our lives better, simpler, and more enjoyable.

Without a doubt, the internet would look very different if it weren’t for recommender systems. But as you may have noticed, what you see as recommended isn’t always what you want, need, or like. In fact, the recommendations can be so off-base that you may be shocked at how badly the internet has misread you. Recommender systems aren’t perfect (at least not yet), and they face several challenges that affect their performance:

  • Data sparsity and scalability – If users don’t leave a trace online (e.g., they don’t review items), the machines don’t have enough data to analyze and make recommendations. Likewise, the datasets change and grow constantly, which creates scalability problems.
  • Cold start problem – When new users become a part of a system, they may not receive relevant recommendations because algorithms don’t “know” their preferences, past purchases, or ratings. The same goes for new items introduced to a system.
  • Privacy and security concerns – Privacy and security are always in the spotlight with recommender systems. The situation is a paradox: the more a system knows about you, the better your recommendations will be. At the same time, you may not be willing to let a system learn your personal information if you want to maintain your privacy. But then, you won’t enjoy great recommendations.
  • Incorporating contextual information – Besides “typical” information, other data can help make more precise and relevant recommendations. The problem is how to incorporate them.
  • Explainability and trust – Can a recommender system explain why it made a certain recommendation, and can you trust it?

Discover New Worlds with Recommender Systems

Recommender systems are growing smarter by the day, thanks to machine learning and technological advancements. Recommendations were introduced to save us time and help us find exactly what we’re looking for in a jiffy. At the same time, they let us experiment and try something different.

While recommender systems have come a long way, there’s still more than enough room for further development.
