Human-centric cyber threats have long posed a serious problem for organizations. After all, humans are often the weakest link in the cybersecurity chain. Unfortunately, the arrival of artificial intelligence has only made these threats more dangerous.

So, what can be done about these cyber threats now?

That’s precisely what we asked Tom Vazdar, the chair of the Enterprise Cybersecurity Master’s program at the Open Institute of Technology (OPIT), and Venicia Solomons, aka the “Cyber Queen.”

They dedicated a significant portion of their “Cyber Threat Landscape 2024: Navigating New Risks” master class to AI-powered human-centric cyber threats. So, let’s see what these two experts have to say on the topic.

Human-Centric Cyber Threats 101

Before exploring how AI has impacted human-centric cyber threats, let’s go back to basics. What are human-centric cyber threats?

As you might conclude from the name, human-centric cyber threats are cybersecurity risks that exploit human behavior or vulnerabilities (e.g., fear). Even if you haven’t heard of the term “human-centric cyber threats,” you’ve probably heard of (or even experienced) the threats themselves.

The most common of these threats are phishing attacks, which rely on deceptive emails to trick users into revealing confidential information (or clicking on malicious links). The result? Stolen credentials, ransomware infections, and general IT chaos.

How Has AI Impacted Human-Centric Cyber Threats?

AI has infiltrated virtually every corner of cybersecurity, and social engineering is no exception.

As mentioned, AI has made human-centric cyber threats substantially more dangerous. How? By making them difficult to spot.

In Venicia’s words, AI has allowed “a more personalized and convincing social engineering attack.”

In terms of email phishing, malicious actors use AI to write “beautifully crafted emails,” as Tom puts it. These emails contain no grammatical errors and can mimic the sender’s writing style, making them appear more legitimate and harder to identify as fraudulent.

These highly targeted AI-powered phishing emails are no longer considered “regular” phishing attacks but spear phishing emails, which are significantly more likely to fool their targets.

Unfortunately, it doesn’t stop there.

As AI technology advances, its capabilities go far beyond crafting a simple email. Venicia warns that AI-powered voice technology can even create convincing voice messages or phone calls that sound exactly like a trusted individual, such as a colleague, supervisor, or even the CEO of the company. Obey the instructions from these phone calls, and you’ll likely put your organization in harm’s way.

How to Counter AI-Powered Human-Centric Cyber Threats

Given how advanced human-centric cyber threats have become, one logical question arises – how can organizations counter them? Luckily, there are several ways to do this. Some rely on technology to detect and mitigate threats. However, most strive to correct what causes the problem in the first place – human behavior.

Enhancing Email Security Measures

The first step in countering the most common human-centric cyber threats applies to everyone, from individuals to organizations: enhance your email security measures.

Tom provides a brief overview of how you can do this.

No. 1 – you need a reliable filtering solution. For Gmail users, there’s already one such solution in place.

No. 2 – organizations should take full advantage of phishing filters. Where only spam filters once existed, dedicated phishing filters are a major upgrade in email security.

And No. 3 – you should consider implementing DMARC (Domain-based Message Authentication, Reporting, and Conformance) to prevent email spoofing and phishing attacks.
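
To see where your own domain stands, here is a minimal sketch (assuming the third-party dnspython package) that checks whether a domain publishes a DMARC policy; the domain and report address shown are placeholders.

```python
# Minimal sketch: check whether a domain publishes a DMARC policy by querying
# its _dmarc TXT record. Assumes the third-party dnspython package
# (pip install dnspython); "example.com" is a placeholder domain.
import dns.resolver


def get_dmarc_policy(domain: str) -> str | None:
    """Return the domain's DMARC record, or None if no policy is published."""
    try:
        answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return None
    for record in answers:
        text = b"".join(record.strings).decode()
        if text.startswith("v=DMARC1"):
            return text
    return None


if __name__ == "__main__":
    # A strict policy typically looks like:
    # "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com"
    print(get_dmarc_policy("example.com") or "No DMARC policy published")
```

If nothing comes back, the domain’s mail is easier to spoof – exactly the gap DMARC is designed to close.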

Keeping Up With System Updates

Another “technical” move you can make to counter AI-powered human-centric cyber threats is to ensure all your systems are regularly updated. Fail to keep up with software updates and patches, and you’re looking at a strong possibility of facing zero-day attacks. Zero-day attacks are particularly dangerous because they exploit vulnerabilities that are unknown to the software vendor, making them difficult to defend against.
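
As a small illustration of that discipline, the sketch below compares installed Python dependencies against a hypothetical baseline of minimum patched versions; the package names and thresholds are assumptions, and the third-party packaging library is required.

```python
# A small illustration of patch hygiene: compare installed Python dependencies
# against a hypothetical baseline of minimum patched versions.
# Assumes the third-party 'packaging' library (pip install packaging);
# the package names and thresholds below are made-up examples.
from importlib.metadata import PackageNotFoundError, version

from packaging.version import Version

# Hypothetical baseline an organization might maintain after reviewing advisories.
MINIMUM_PATCHED = {
    "requests": "2.32.0",
    "cryptography": "42.0.0",
}


def find_outdated(baseline: dict[str, str]) -> list[str]:
    """Return packages installed below their minimum patched version."""
    outdated = []
    for package, minimum in baseline.items():
        try:
            installed = Version(version(package))
        except PackageNotFoundError:
            continue  # not installed, so nothing to patch
        if installed < Version(minimum):
            outdated.append(f"{package}: {installed} < {minimum}")
    return outdated


if __name__ == "__main__":
    for line in find_outdated(MINIMUM_PATCHED):
        print("Needs update:", line)
```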


Nurturing a Culture of Skepticism

The key component of human-centric cyber threats is, in fact, humans. That’s why humans should also be the key component in countering these threats.

At an organizational level, numerous steps are needed to minimize the risks of employees falling for these threats. But it all starts with what Tom refers to as a “culture of skepticism.”

Employees should constantly be suspicious of any unsolicited emails, messages, or requests for sensitive information.

They should always ask themselves – who is sending this, and why are they doing so?

This is especially important if the correspondence comes from a seemingly trusted source. As Tom puts it, “Don’t click immediately on a link that somebody sent you because you are familiar with the name.” He labels this as the “Rule No. 1” of cybersecurity awareness.

Growing the Cybersecurity Culture

This culture of skepticism will help create a more security-conscious workforce. But it’s far from enough to fundamentally change how employees perceive (and respond to) threats. For that, you need a strong cybersecurity culture.

Tom links this culture to corporate culture. The mission, vision, statement of purpose, and values that shape the corporate culture should also extend to cybersecurity. Of course, this isn’t something companies can achieve overnight. They must grow and nurture this culture if they are to see any meaningful results.

According to Tom, it will probably take at least 18 months before these results start to show.

During this time, organizations must work on strengthening the relationships between every department, with a particular focus on human resources and security. These two departments should primarily drive the cybersecurity culture within the company, as they’re well versed in the two pillars of this culture – human behavior and cybersecurity.

However, this strong interdepartmental relationship is important for another reason.

As Tom puts it, “[As humans], we cannot do anything by ourselves. But as a collective, with the help within the organization, we can.”

Staying Educated

The worlds of AI and cybersecurity have one thing in common – they never sleep. The only way to keep up with these ever-evolving fields is to stay educated.

The best practice would be to gain a solid base by completing a comprehensive program, such as OPIT’s Enterprise Cybersecurity Master’s program. Then, it’s all about continuously learning about new developments, trends, and threats in AI and cybersecurity.

Conducting Regular Training

For most people, it’s not enough to just explain how human-centric cyber threats work. They must see them in action, especially since many people believe that phishing attacks won’t happen to them or that, if they do, they simply won’t fall for them. Unfortunately, neither assumption is true.

Approximately 3.4 billion phishing emails are sent each day, and millions of them successfully bypass all email authentication methods. With figures this high, developing critical thinking among employees is the No. 1 priority. After all, humans are the first line of defense against cyber threats.

But humans must be properly trained to counter these cyber threats. This training includes the organization’s security department sending fake phishing emails to employees to test their vigilance. Venicia calls employees who fall for these emails “clickers” and adds that no one wants to be a clicker. So, they do everything in their power to avoid falling for similar attacks in the future.

However, successful training in this area also means varying the fake emails. If the company keeps trying to trick employees in the same way, they’ll become desensitized and less likely to take real threats seriously.

So, Tom proposes including gamification in the training. This way, the training can be more engaging and interactive, encouraging employees to actively participate and learn. Interestingly, AI can be a powerful ally here, helping create realistic scenarios and personalized learning experiences based on employee responses.
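
As a rough sketch of what the gamified side of such training could look like, the snippet below scores simulated-phishing results and builds a simple leaderboard; the names, fields, and scoring weights are hypothetical rather than any specific vendor’s or OPIT’s method.

```python
# A rough sketch of scoring simulated-phishing results per employee.
# All names, fields, and scoring weights here are hypothetical.
from dataclasses import dataclass


@dataclass
class EmployeeResult:
    name: str
    emails_reported: int = 0  # simulations reported to the security team
    emails_clicked: int = 0   # simulations the employee fell for ("clicker")

    @property
    def score(self) -> int:
        # Reward reporting more than clicking is penalized; never drop below zero.
        return max(0, self.emails_reported * 10 - self.emails_clicked * 5)


def leaderboard(results: list[EmployeeResult]) -> list[EmployeeResult]:
    """Rank employees by score so training celebrates reporters rather than shaming clickers."""
    return sorted(results, key=lambda r: r.score, reverse=True)


if __name__ == "__main__":
    staff = [
        EmployeeResult("A. Analyst", emails_reported=3, emails_clicked=0),
        EmployeeResult("B. Builder", emails_reported=1, emails_clicked=2),
    ]
    for rank, result in enumerate(leaderboard(staff), start=1):
        print(f"{rank}. {result.name}: {result.score} points")
```

Ranking by reports rather than publicly shaming clickers keeps the game constructive.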

Following in the Competitors’ Footsteps

When it comes to cybersecurity, it’s crucial to be proactive rather than reactive. Even if an organization hasn’t had issues with cyberattacks, it doesn’t mean it will stay this way. So, the best course of action is to monitor what competitors are doing in this field.

However, organizations shouldn’t stop with their competitors. They should also study other real-world social engineering incidents that can offer valuable insights into the tactics used by malicious actors.

Tom advises visiting the many open-source databases reporting on these incidents and using the data to build an internal educational program. This gives organizations a chance to learn from other people’s mistakes and potentially prevent those mistakes from happening within their ecosystem.

Stay Vigilant

It’s perfectly natural for humans to feel curiosity when it comes to new information, anxiety regarding urgent-looking emails, and trust when seeing a familiar name pop up on the screen. But in the world of cybersecurity, these basic human emotions can cause a lot of trouble. That is, at least, when humans act on them.

So, organizations must work on correcting human behaviors, not suppressing basic human emotions. By doing so, they can help employees develop a more critical mindset when interacting with digital communications. The result? A cyber-aware workforce that’s well-equipped to recognize and respond to phishing attacks and other cyber threats appropriately.

Related posts

Agenda Digitale: AI Ethics Starts with Data – The Role of Training
OPIT - Open Institute of Technology
May 20, 2025 6 min read

By Riccardo Ocleppo, Founder and Director of OPIT – Open Institute of Technology

AI ethics requires ongoing commitment. Organizations must integrate guidelines and a corporate culture geared towards responsibility and inclusiveness, preventing negative consequences for individuals and society.

In the world of artificial intelligence, concerns about algorithmic bias are coming to the forefront, calling for a collective effort to promote ethical practices in the development and use of AI.

This means understanding the many causes and potential consequences of bias, identifying concrete solutions, and recognizing the key role of academic institutions in this process.

Bias in AI is a form of injustice, often systemic, that can be embedded in algorithms. Its origins are many, but the main culprit is almost always the data set used to train the models. If this data reflects inequalities or prejudices present in society, the risk is that AI will absorb and reproduce them, consolidating these distortions.

But bias can also manifest itself in the opposite direction. This is what happened some time ago with Google Gemini. The generative AI system developed by Google, in an attempt to ensure greater inclusivity, ended up generating content and images completely disconnected from the reality it was supposed to represent.

Further complicating the picture is the very nature of AI models, which are often characterized by complex algorithms and opaque decision-making processes. This complexity makes it difficult to identify, and therefore correct, biases inherent in the systems.

Ethical Data Management to Reduce Bias in AI

Adopting good data management practices is essential to address these issues. The first step is to ensure that the datasets used for training are diverse and representative. This means actively seeking out data that covers a wide variety of demographic, cultural, and social contexts, so that AI does not simply reproduce existing, potentially biased patterns.

Alongside data diversification, it is equally important to test models on different demographic groups. Only in this way can latent biases that would otherwise remain invisible be highlighted. Furthermore, promoting transparency in algorithms and decision-making processes is crucial. Transparency allows for critical control and makes all actors involved in the design and use of AI accountable.
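
As a minimal sketch of that kind of testing, the snippet below compares a model’s accuracy and selection rate across groups with pandas; the column names and sample data are hypothetical.

```python
# A minimal sketch of comparing a model's behavior across demographic groups.
# Column names and sample data are hypothetical; assumes pandas is installed.
import pandas as pd


def per_group_metrics(df: pd.DataFrame, group_col: str = "group") -> pd.DataFrame:
    """Report accuracy and selection rate per group from 'y_true'/'y_pred' columns."""
    df = df.assign(correct=(df["y_true"] == df["y_pred"]).astype(float))
    grouped = df.groupby(group_col)
    return pd.DataFrame({
        "n": grouped.size(),
        "accuracy": grouped["correct"].mean(),
        "selection_rate": grouped["y_pred"].mean(),
    })


if __name__ == "__main__":
    sample = pd.DataFrame({
        "group":  ["A", "A", "A", "B", "B", "B"],
        "y_true": [1, 0, 1, 1, 0, 0],
        "y_pred": [1, 0, 1, 0, 1, 0],
    })
    print(per_group_metrics(sample))
```

Large gaps between groups do not prove bias on their own, but they flag exactly where a closer look is needed.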

Strategies for Ethical and Responsible Artificial Intelligence

Building ethical AI is not an isolated action but an ongoing journey that requires constant attention and updating. This commitment breaks down into several fundamental steps. First, ethical guidelines must be defined. Organizations must clearly establish the ethical standards to follow in the development and use of AI, inspired by fundamental values such as fairness, responsibility, and transparency. These principles serve as a compass to guide all projects.

It is also essential to include a plurality of perspectives in the development of AI. Multidisciplinary teams, composed of technologists, ethicists, sociologists, and representatives of the communities potentially involved, can help prevent and correct biases thanks to their variety of approaches. Last but not least, organizations should promote an ethical culture: beyond establishing rules and composing diverse teams, it is essential to cultivate a corporate culture that places ethics at the center of every project. Only by integrating these values into the DNA of the organization can ethics become a founding element of AI development.

The Consequences of Biased Artificial Intelligence

Ignoring the problem of bias can have serious and unpredictable consequences, with profound impacts on different areas of our lives. From the reinforcement of social inequalities to the loss of trust in AI-based systems, the risk is fueling skepticism and resistance toward technological innovation. AI, if distorted, can negatively influence crucial decisions in sectors such as healthcare, employment, and justice. Think, for example, of loan-selection algorithms that unfairly penalize certain groups, or facial recognition software that misidentifies people, with possible legal consequences. These are just some of the situations in which unethical use of AI can worsen existing inequalities.

University Training and Research to Counter Bias in AI

Universities and higher education institutions have a crucial responsibility to address bias and promote ethical practices in AI development. Ethics must be integrated into educational curricula: by including ethics modules in AI and computer science courses, universities can give new generations of developers the tools to recognize and address bias, contributing to more equitable and inclusive design. Universities can also lead through research.

Academic institutions, with their autonomy and expertise, can explore the complexities of bias in depth, developing innovative solutions for detecting and mitigating it. Since bias is multidimensional in nature, a collaborative, interdisciplinary approach is needed. Universities can create spaces where computer scientists, ethicists, lawyers, and social scientists work together, offering more comprehensive and innovative solutions.

But that’s not all. As places of critical thinking and debate, universities can foster dialogue between developers, policy makers, and citizens through events, workshops, and conferences. This engagement is essential to raise awareness and promote responsible use of AI.

In this direction, several universities have already activated degree courses in artificial intelligence that combine advanced technical skills (in areas such as machine learning, computer vision and natural language processing) with training that is attentive to ethical and human implications.

Academic Opportunities for an Equitable AI Future

More and more universities around the world – including Yale and Oxford – are also creating research departments dedicated to AI and ethics.

The path to ethical AI is complex, but it also represents an opportunity to build a future where technology truly serves the common good.

By recognizing the root causes of bias, adopting responsible data practices, and engaging in ongoing and vigilant development, we can reduce the unintended effects of biased algorithms. In this process, academic institutions – thanks to their expertise and authority – are at the forefront, helping to shape a more equitable and inclusive digital age.

TechFinancials: Are We Raising AI Correctly?
OPIT - Open Institute of Technology
May 20, 2025 5 min read

By Zorina Alliata

Artificial intelligence (AI) used to be the stuff of science fiction. Stories about rogue machines and robot uprisings were once a source of amusement, not anxiety. But over recent years, AI has quietly embedded itself in our daily lives.

From the algorithms behind social media feeds to the voice assistants managing our calendars, this quiet takeover has become something far louder: fear.

Headlines around AI are often alarmist. Statements such as “AI will take your job”, “AI will end education”, or “AI is dangerous and unregulated” are thrown around regularly. These narratives feed on uncertainty and fuel distrust.

But it doesn’t have to be this way. The hyper-fixation on the never-ending negative aspects of AI is the wrong approach to take. What if AI isn’t the villain? What if, at this stage, it’s simply a child?

AI, in many ways, is still learning. It mimics human behaviour, absorbs language, and forms patterns based on what it sees. Its current capabilities, however powerful they may seem, are not equivalent to human intelligence. It has limitations. It makes mistakes. It can even be manipulated and misled. It reflects our world, flaws and all. In that sense, AI is less an omnipotent force and more like a toddler trying to find its way.

And, like any child, it needs guidance.

This is especially evident in education. The emergence of AI tools such as ChatGPT has caused a stir in higher education institutions and universities, sparking fears about plagiarism and the erosion of critical thinking. Some institutions have responded with strict bans, while others have embraced cautious integration. The panic is understandable, but is it misplaced?

Rather than jumping to conclusions, educators should consider shifting the conversation. AI can, in fact, become an ally in learning. Instead of assuming students will cheat, we can teach them to use AI responsibly. Most of us educators can already recognise the signs of AI-generated work: excessive use of numbered lists, repetitive language and poor comparison skills. So why not use this as a teaching opportunity?

Encouraging students to engage with AI critically – understanding what it’s good at, where it falls short, and how to improve its output – can strengthen their own analytical skills. It invites them to become more active participants in their learning, not passive consumers of machine-generated answers. Teaching young people how to work with AI is arguably more important than shielding them from it.

Outside the classroom, AI’s impact on the workforce is another growing concern. Stories about AI replacing jobs often dominate the news cycle. But these conversations often ignore a key point: AI is not autonomous. AI needs human designers, engineers, analysts, and ethicists to guide it. For every job that AI may eliminate, others will emerge to support and direct it.

More importantly, there are many things AI simply cannot do. It doesn’t understand nuance, morality or emotion. It can’t make ethical decisions without human input. These aren’t minor gaps, they’re fundamental. That’s why we must stop imagining AI as an unstoppable force and start thinking about how to raise it responsibly.

When considering how to raise our AI child responsibly, we need to acknowledge the issue of the algorithm being biased. Critics often point out that AI reproduces prejudices and errors, and whilst this is true, the source of that bias is us. It is important to remember that AI learns from historical data created by us, much of which reflects deeply ingrained societal inequalities.

Take, for example, mortgage lending in the US, where decades of discriminatory practices have skewed the data. Unless we intervene, AI trained on this information will inevitably reflect those same biases.

That’s not a reason to reject AI. It’s a reason to be more involved in its development, like any good parent. The responsibility lies with us.

Parenting is not about control for control’s sake; it’s about nurturing growth while setting boundaries. AI, like a child, needs feedback, accountability, and care. It will grow, but how it grows is up to us.

It’s tempting to view technology as something that happens to us, rather than something we can shape. But AI doesn’t exist outside of society, it’s a product of our values, decisions, and input. If we treat it as a monster, it may become one. If we treat it as a mirror, it will reflect what we show it. And if we treat it as a child, we may be able to raise it into something better.

So instead of fearmongering, let’s ask ourselves a better question: Are we raising AI correctly?
