The Magazine

👩‍💻 Welcome to OPIT’s blog! You will find relevant news on the education and computer science industries.

Agenda Digitale: AI Ethics Starts with Data – The Role of Training
OPIT - Open Institute of Technology
May 20, 2025

By Riccardo Ocleppo, Founder and Director of OPIT – Open Institute of Technology

AI ethics requires ongoing commitment. Organizations must integrate guidelines and a corporate culture geared towards responsibility and inclusiveness, preventing negative consequences for individuals and society.

In the world of artificial intelligence, concerns about algorithmic bias are coming to the forefront, calling for a collective effort to promote ethical practices in the development and use of AI.

This implies the need to understand the multiple causes and potential consequences of the biases themselves, identify concrete solutions and recognize the key role of academic institutions in this process.

Bias in AI is a form of injustice, often systemic, that can be embedded in algorithms. Its origins are many, but the main culprit is almost always the data set used to train the models. If this data reflects inequalities or prejudices present in society, the risk is that AI will absorb and reproduce them, consolidating these distortions.

But bias can also manifest itself in the opposite direction. This is what happened some time ago with Google Gemini. The generative AI system developed by Google, in an attempt to ensure greater inclusivity, ended up generating content and images completely disconnected from the reality it was supposed to represent.

Further complicating the picture is the very nature of AI models, which are often characterized by complex algorithms and opaque decision-making processes. This complexity makes it difficult to identify, and therefore correct, biases inherent in the systems.

Ethical Data Management to Reduce Bias in AI

Adopting good data management practices is essential to address these issues. The first step is to ensure that the datasets used for training are diverse and representative. This means actively seeking data that includes a wide variety of demographic, cultural, and social contexts, so as to avoid AI exclusively reproducing existing and potentially biased models.

Alongside data diversification, it is equally important to test models on different demographic groups. Only in this way can latent biases that would otherwise remain invisible be highlighted. Furthermore, promoting transparency in algorithms and decision-making processes is crucial. Transparency allows for critical control and makes all actors involved in the design and use of AI accountable.
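To make the testing practice above concrete, here is a minimal sketch, not taken from the article, of how a team might surface such latent biases: computing a model’s accuracy separately for each demographic group so that performance gaps become visible instead of being averaged away. The group labels, predictions, and numbers are purely illustrative.

```python
from collections import defaultdict

def accuracy_by_group(y_true, y_pred, groups):
    """Compute accuracy separately for each demographic group."""
    correct, total = defaultdict(int), defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        total[group] += 1
        if truth == pred:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Illustrative data: the model looks acceptable overall but is weaker on group "B".
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 1, 1, 0, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(accuracy_by_group(y_true, y_pred, groups))  # {'A': 0.75, 'B': 0.5}
```

A gap reported per group, rather than hidden in an overall average, is exactly the kind of signal that the testing and transparency practices described above are meant to expose.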

Strategies for Ethical and Responsible Artificial Intelligence

Building ethical AI is not an isolated action, but an ongoing journey that requires constant attention and updating. This commitment is divided into several fundamental steps. First, ethical guidelines must be defined. Organizations must clearly establish the ethical standards to follow in the development and use of AI, inspired by fundamental values such as fairness, responsibility, and transparency. These principles serve as a compass to guide all projects.

It is also essential to include a plurality of perspectives in the development of AI. Multidisciplinary teams, composed of technologists, ethicists, sociologists and representatives of the potentially involved communities, can help prevent and correct biases thanks to the variety of approaches. Last but not least, organizations must promote an ethical culture: in addition to establishing rules and composing diverse teams, it is essential to cultivate a corporate culture that places ethics at the center of every project. Only by integrating these values into the DNA of the organization can we ensure that ethics is a founding element of the development of AI.

The Consequences of Biased Artificial Intelligence

Ignoring the problem of bias can have serious and unpredictable consequences, with profound impacts on different areas of our lives. From the reinforcement of social inequalities to the loss of trust in AI-based systems, the risk is that of fueling skepticism and resistance towards technological innovation. AI, if distorted, can negatively influence crucial decisions in sectors such as healthcare, employment and justice. Think, for example, of loan selection algorithms that unfairly penalize certain categories, or facial recognition software that incorrectly identifies people, with possible legal consequences. These are just some of the situations in which an unethical use of AI can worsen existing inequalities.

University Training and Research to Counter Bias in AI

Universities and higher education institutions have a crucial responsibility to address bias and promote ethical practices in AI development. Ethics must certainly be integrated into educational curricula. By including ethics modules in AI and computer science courses, universities can provide new generations of developers with the tools to recognize and address bias, contributing to more equitable and inclusive design. Universities can also be protagonists through research.

Academic institutions, with their autonomy and expertise, can explore the complexities of bias in depth, developing innovative solutions for detecting and mitigating it. Since bias is multidimensional in nature, an interdisciplinary approach is needed. Universities can create spaces where computer scientists, ethicists, lawyers, and social scientists work together, offering more comprehensive and innovative solutions.

But that’s not all. As places of critical thinking and debate, universities can foster dialogue between developers, policy makers, and citizens through events, workshops, and conferences. This engagement is essential to raise awareness and promote responsible use of AI.

In this direction, several universities have already activated degree courses in artificial intelligence that combine advanced technical skills (in areas such as machine learning, computer vision and natural language processing) with training that is attentive to ethical and human implications.

Academic Opportunities for an Equitable AI Future

More and more universities around the world – including Yale and Oxford – are also creating research departments dedicated to AI and ethics.

The path to ethical AI is complex, but it also represents an opportunity to build a future where technology truly serves the common good.

By recognizing the root causes of bias, adopting responsible data practices, and engaging in ongoing and vigilant development, we can reduce the unintended effects of biased algorithms. In this process, academic institutions – thanks to their expertise and authority – are at the forefront, helping to shape a more equitable and inclusive digital age.

Read the full article below:

Read the article
TechFinancials: Are We Raising AI Correctly?
OPIT - Open Institute of Technology
May 20, 2025

By Zorina Alliata

Artificial intelligence (AI) used to be the stuff of science fiction. Stories about rogue machines and robot uprisings were once a source of amusement, not anxiety. But over recent years, AI has quietly embedded itself in our daily lives.

From the algorithms behind social media feeds to the voice assistants managing our calendars, this quiet takeover has become something far louder: fear.

Headlines around AI are often alarmist. Statements such as “AI will take your job”, “AI will end education”, or “AI is dangerous and unregulated” are thrown around regularly. These narratives feed on uncertainty and fuel distrust.

But it doesn’t have to be this way. The hyper-fixation on the never-ending negative aspects of AI is the wrong approach to take. What if AI isn’t the villain? What if, at this stage, it’s simply a child?

AI, in many ways, is still learning. It mimics human behaviour, absorbs language, and forms patterns based on what it sees. Its current capabilities, however powerful they may seem, are not equivalent to human intelligence. It has limitations. It makes mistakes. It can even be manipulated and misled. It reflects our world, flaws and all. In that sense, AI is less an omnipotent force and more in line with a toddler trying to find its way.

And, like any child, it needs guidance.

This is especially evident in education. The emergence of AI tools such as ChatGPT has caused a stir in higher education institutions and universities, sparking fears about plagiarism and the erosion of critical thinking. Some institutions have responded with strict bans, while others have embraced cautious integration. The panic is understandable, but is it misplaced?

Rather than jumping to conclusions, educators should consider shifting the conversation. AI can, in fact, become an ally in learning. Instead of assuming students will cheat, we can teach them to use AI responsibly. Most of us educators can already recognise the signs of AI-generated work: excessive use of numbered lists, repetitive language and poor comparison skills. So why not use this as a teaching opportunity?

Encouraging students to engage with AI critically, understanding what it’s good at, where it falls short, and how to improve its output, can strengthen their own analytical skills. It invites them to become more active participants in their learning, not passive consumers of machine generated answers. Teaching young people how to work with AI is arguably more important than shielding them from it.

Outside the classroom, AI’s impact on the workforce is another growing concern. Stories about AI replacing jobs often dominate the news cycle. But these conversations often ignore a key point: AI is not autonomous. AI needs human designers, engineers, analysts, and ethicists to guide it. For every job that AI may eliminate, others will emerge to support and direct it.

More importantly, there are many things AI simply cannot do. It doesn’t understand nuance, morality or emotion. It can’t make ethical decisions without human input. These aren’t minor gaps, they’re fundamental. That’s why we must stop imagining AI as an unstoppable force and start thinking about how to raise it responsibly.

When considering how to raise our AI child responsibly, we need to acknowledge the issue of the algorithm being biased. Critics often point out that AI reproduces prejudices and errors, and whilst this is true, the source of that bias is us. It is important to remember that AI learns from historical data created by us, much of which reflects deeply ingrained societal inequalities.

Take, for example, mortgage lending in the US, where decades of discriminatory practices have skewed the data. Unless we intervene, AI trained on this information will inevitably reflect those same biases.

That’s not a reason to reject AI. It’s a reason to be more involved in its development, like any good parent. The responsibility lies with us.

Parenting is not about control for control’s sake; it’s about nurturing growth while setting boundaries. AI, like a child, needs feedback, accountability, and care. It will grow, but how it grows is up to us.

It’s tempting to view technology as something that happens to us, rather than something we can shape. But AI doesn’t exist outside of society, it’s a product of our values, decisions, and input. If we treat it as a monster, it may become one. If we treat it as a mirror, it will reflect what we show it. And if we treat it as a child, we may be able to raise it into something better.

So instead of fearmongering, let’s ask ourselves a better question: Are we raising AI correctly?

Read the full article below:

Read the article
Wired: Think Twice Before Creating That ChatGPT Action Figure
OPIT - Open Institute of Technology
May 12, 2025

Source:

  • Wired, published on May 1, 2025

People are using ChatGPT’s new image generator to take part in viral social media trends. But using it also puts your privacy at risk—unless you take a few simple steps to protect yourself.

By Kate O’Flaherty

At the start of April, an influx of action figures started appearing on social media sites including LinkedIn and X. Each figure depicted the person who had created it with uncanny accuracy, complete with personalized accessories such as reusable coffee cups, yoga mats, and headphones.

All this is possible because of OpenAI’s new GPT-4o-powered image generator, which supercharges ChatGPT’s ability to edit pictures, render text, and more. OpenAI’s ChatGPT image generator can also create pictures in the style of Japanese animated film company Studio Ghibli—a trend that quickly went viral, too.

The images are fun and easy to make—all you need is a free ChatGPT account and a photo. Yet to create an action figure or Studio Ghibli-style image, you also need to hand over a lot of data to OpenAI, which could be used to train its models.

Hidden Data

The data you are giving away when you use an AI image editor is often hidden. Every time you upload an image to ChatGPT, you’re potentially handing over “an entire bundle of metadata,” says Tom Vazdar, area chair for cybersecurity at Open Institute of Technology. “That includes the EXIF data attached to the image file, such as the time the photo was taken and the GPS coordinates of where it was shot.”

OpenAI also collects data about the device you’re using to access the platform. That means your device type, operating system, browser version, and unique identifiers, says Vazdar. “And because platforms like ChatGPT operate conversationally, there’s also behavioral data, such as what you typed, what kind of images you asked for, how you interacted with the interface and the frequency of those actions.”
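As a practical illustration of the metadata point Vazdar raises, the short sketch below, an assumption of this post rather than something from the Wired piece, uses the Pillow library to inspect and strip EXIF metadata (timestamps, GPS coordinates, device details) from a photo before it is uploaded anywhere. The file names are hypothetical.

```python
from PIL import Image

def strip_exif(src_path: str, dst_path: str) -> None:
    """Re-save an image without its EXIF metadata."""
    with Image.open(src_path) as img:
        exif = img.getexif()
        if exif:
            print(f"Dropping {len(exif)} EXIF tags")  # e.g. timestamps, GPS IFD pointer
        # Copying only the pixel data into a fresh image leaves the metadata behind.
        clean = Image.new(img.mode, img.size)
        clean.putdata(list(img.getdata()))
        clean.save(dst_path)

strip_exif("portrait.jpg", "portrait_clean.jpg")  # hypothetical file names
```

Stripping metadata does not remove what is visible in the picture itself, but it does avoid handing over the time and place the photo was taken.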

It’s not just your face. If you upload a high-resolution photo, you’re giving OpenAI whatever else is in the image, too—the background, other people, things in your room and anything readable such as documents or badges, says Camden Woollven, group head of AI product marketing at risk management firm GRC International Group.

This type of voluntarily provided, consent-backed data is “a gold mine for training generative models,” especially multimodal ones that rely on visual inputs, says Vazdar.

OpenAI denies it is orchestrating viral photo trends as a ploy to collect user data, yet the firm certainly gains an advantage from it. OpenAI doesn’t need to scrape the web for your face if you’re happily uploading it yourself, Vazdar points out. “This trend, whether by design or a convenient opportunity, is providing the company with massive volumes of fresh, high-quality facial data from diverse age groups, ethnicities, and geographies.”

OpenAI says it does not actively seek out personal information to train models—and it doesn’t use public data on the internet to build profiles about people to advertise to them or sell their data, an OpenAI spokesperson tells WIRED. However, under OpenAI’s current privacy policy, images submitted through ChatGPT can be retained and used to improve its models.

Any data, prompts, or requests you share helps teach the algorithm—and personalized information helps fine-tune it further, says Jake Moore, global cybersecurity adviser at security outfit ESET, who created his own action figure to demonstrate the privacy risks of the trend on LinkedIn.

Uncanny Likeness

In some markets, your photos are protected by regulation. In the UK and EU, data-protection regulations, including the GDPR, offer strong protections, including the right to access or delete your data. At the same time, use of biometric data requires explicit consent.

However, photographs become biometric data only when processed through a specific technical means allowing the unique identification of a specific individual, says Melissa Hall, senior associate at law firm MFMac. Processing an image to create a cartoon version of the subject in the original photograph is “unlikely to meet this definition,” she says.

Meanwhile, in the US, privacy protections vary. “California and Illinois are leading with stronger data protection laws, but there is no standard position across all US states,” says Annalisa Checchi, a partner at IP law firm Ionic Legal. And OpenAI’s privacy policy doesn’t contain an explicit carve-out for likeness or biometric data, which “creates a grey area for stylized facial uploads,” Checchi says.

The risks include your image or likeness being retained, potentially used to train future models, or combined with other data for profiling, says Checchi. “While these platforms often prioritize safety, the long-term use of your likeness is still poorly understood—and hard to retract once uploaded.”

OpenAI says its users’ privacy and security are a top priority. The firm wants its AI models to learn about the world, not private individuals, and it actively minimizes the collection of personal information, an OpenAI spokesperson tells WIRED.

Meanwhile, users have control over how their data is used, with self-service tools to access, export, or delete personal information. You can also opt out of having content used to improve models, according to OpenAI.

ChatGPT Free, Plus, and Pro users can control whether they contribute to future model improvements in their data controls settings. OpenAI does not train on ChatGPT Team, Enterprise, and Edu customer data by default, according to the company.

Read the full article below:

Read the article
LADBible and Yahoo News: Viral AI trend could present huge privacy concerns, says expert
OPIT - Open Institute of Technology
May 12, 2025

You’ve probably seen them all over Instagram

By James Moorhouse

Experts have warned against participating in a viral social media trend which sees people use ChatGPT to create an action figure version of themselves.

If you’ve spent any time whatsoever doomscrolling on Instagram, TikTok or, dare I say it, LinkedIn recently, you’ll be all too aware of the viral trend.

Obviously, there’s nothing more entertaining and frivolous than seeing AI generated versions of your co-workers and their cute little laptops and piña coladas, but it turns out that it might not be the best idea to take part.

There may well be some benefits to artificial intelligence but often it can produce some pretty disturbing results. Earlier this year, a lad from Norway sued ChatGPT after it falsely claimed he had been convicted of killing two of his kids.

Unfortunately, if you don’t like AI, then you’re going to have to accept that it’s going to become a regular part of our lives. You only need to look at WhatsApp or Facebook Messenger to realise that. But it’s always worth saying please and thank you to ChatGPT just in case society does collapse and the AI robots take over, in the hope that they treat you mercifully. Although it might cost them a little more electricity.

Anyway, in case you’re thinking of getting involved in this latest AI trend and sharing your face and your favourite hobbies with a high tech robot, maybe don’t. You don’t want to end up starring in your own Netflix series, à la Black Mirror.

Tom Vazdar, area chair for cybersecurity at Open Institute of Technology, spoke with Wired about some of the dangers of sharing personal details about yourself with AI.

Every time you upload an image to ChatGPT, you’re potentially handing over ‘an entire bundle of metadata’, he revealed.

Vazdar added: “That includes the EXIF data attached to the image file, such as the time the photo was taken and the GPS coordinates of where it was shot.

“Because platforms like ChatGPT operate conversationally, there’s also behavioural data, such as what you typed, what kind of images you asked for, how you interacted with the interface and the frequency of those actions.”

Essentially, if you upload a photo of your face, you’re not just giving AI access to your face, but also whatever is in the background, such as the location or other people who might feature.

Vazdar concluded: “This trend, whether by design or a convenient opportunity, is providing the company with massive volumes of fresh, high-quality facial data from diverse age groups, ethnicities, and geographies.”

While we’re at it, maybe stop using ChatGPT for your university essays and general basic questions you can find the answer to on Google as well. The last thing you need is AI knowing you don’t know how to do something basic if it does take over the world.

Read the full article below:

Read the article
Is Your Degree Fit for Purpose? Graduate From University 2 Business
OPIT - Open Institute of Technology
May 06, 2025

It’s not uncommon to hear stories from people who have committed several years to obtaining a university degree, only to discover it doesn’t fit the purposes they need when entering the business world.

Why? Even though universities spend years developing their degree courses in areas such as economics, business, and biomedical science, it is challenging to keep up with the latest technological advancements due to the lengthy approval process and a lack of experts on staff.

Today, artificial intelligence (AI), big data, cloud computing, and cybersecurity are beginning to impact every aspect of our business lives, regardless of whether you work in a cutting-edge science lab or an antiquities museum. However, many graduates fail to leverage this new technology and adapt it to their careers.

This is why OPIT – the Open Institute of Technology – was born, to offer affordable and accessible courses that bridge the gap between what is taught in traditional universities and what the job market requires.

How Is the Job Market Changing?

According to the World Economic Forum’s Future of Jobs Report 2025, 92 million jobs will be displaced by new technologies, though 170 million new jobs will be created that utilize new technology.

The report suggests that 39% of the key skills required in the job market will change by 2030. These include hard technical skills and the soft skills needed to work in creative environments where change is a constant.

New job descriptions will look for big data specialists, fintech engineers, and AI and machine learning specialists. Additionally, employers will also be seeking creative thinkers who are flexible and agile, as well as resilient in the face of change.

Technology-focused jobs that are in increasing demand include:

  • Machine Learning Engineer – Developing and refining algorithms that enable systems to learn from data and improve performance.
  • Natural Language Processing Specialist – Developing chatbots that can understand users, communicate naturally, and provide valuable assistance.
  • AI Ethicist – Ensuring that AI is developed and deployed with broader social, legal, and moral implications considered.
  • Data Architect – Gathering raw data from different sources and designing infrastructure that consolidates this information and makes it usable.
  • Chief Data Officer – Leading a company’s data collection and application strategy, ensuring data-driven decision-making.
  • Cybersecurity Engineer – Building information security systems and IT architecture, and protecting them from unauthorized access and cyberattacks.

Over the next few years, we can expect most jobs to require an understanding of the applications for cutting-edge technology, if not how to manage the technical backend. Leaders need to know how to implement AI and automation to save time and reduce errors. Researchers need to understand how to leverage data to reveal new findings, and everyone needs to understand how to work in secure digital environments.

The conclusion is that in tomorrow’s job market, workers will need to find the right balance of technical and human skills to thrive.

A New Approach to Learning Is Needed

Learning requires a fundamental change. Just as businesses need to be adaptable, places of higher learning need to be more adaptable too, keeping their offerings up-to-date and reducing the timescales required to accredit and deliver new courses fit for the current job market.

This aligns with OPIT’s mission to unlock progress and employment on a global scale by providing high-quality and affordable education in the field of technology.

How Does OPIT Work?

OPIT is accredited by the MFHEA (Malta Further and Higher Education Authority) in accordance with the European Qualifications Framework (EQF).

Working with an evolving faculty of experts, OPIT offers a technological education aligned with the current and future career market.

Currently, OPIT offers two Bachelor’s degrees:

  • Digital Business – Focuses on merging business acumen with digital fluency, bridging the strategy-execution gap in the evolving digital age.
  • Modern Computer Science – Establishes 360-degree foundation skills, both theoretical and applicative, in all aspects of today’s computer science. It includes programming, software development, the cloud, cybersecurity, data science, and AI.

OPIT also offers four Master’s degrees:

  • Digital Business & Innovation – Empowers professionals to drive innovation by leveraging digital technologies and AI, covering topics such as strategy, digital marketing, customer value management, and AI applications.
  • Responsible Artificial Intelligence – Combines technical expertise with a focus on the ethical implications of modern AI, including sustainability and environmental impact.
  • Enterprise Cybersecurity – Integrates technical and managerial expertise, equipping students with the skills to implement security solutions and lead cybersecurity initiatives.
  • Applied Data Science & AI – Focuses on the intersection between management and tech with no computer science prerequisites. It provides foundation applicative courses coupled with real-world business problems approached with data science and AI.

Courses offer flexible online learning, with both live online-native classes and recorded catch-up sessions. Every course is hands-on and career-aligned, preparing students for multiple career options while working with top professionals.

Current faculty members include Zorina Alliata, principal AI strategist at Amazon; Sylvester Kaczmarek, AI mentor and researcher at NASA; Andrea Gozzi, head of Strategy and Partnership for the Digital Industries Ecosystem at Siemens; and Raj Dasgupta, AI and machine learning scientist at the U.S. Naval Research Laboratory.

OPIT designs its courses to be accessible and affordable, with a dedicated career services department that offers one-on-one career coaching and advice.

Graduating From OPIT

OPIT held its first graduation ceremony in 2025. Students described their experience with OPIT as unique, innovative, and inspiring. Share the experience of OPIT’s very first graduates in the video here.

If you are curious to learn more about the OPIT student community, OPIT can connect you with a current student. Just reach out.

Read the article
Master the AI Era: Key Skills for Success
OPIT - Open Institute of Technology
April 24, 2025

The world is rapidly changing. New technologies such as artificial intelligence (AI) are transforming our lives and work, redefining “essential office skills.”

So what essential skills do today’s workers need to thrive in a business world undergoing a major digital transformation? It’s a question that Alan Lerner, director at Toptal and lecturer at the Open Institute of Technology (OPIT), addressed in his recent online masterclass.

In a broad overview of the new office landscape, Lerner shares the essential skills leaders need to manage new technologies – including artificial intelligence – and keep abreast of trends.

Here are eight essential capabilities business leaders in the AI era need, according to Lerner, which he also detailed in OPIT’s recent Master’s in Digital Business and Innovation webinar.

An Adapting Professional Environment

Lerner started his discussion by quoting naturalist Charles Darwin.

“It is not the strongest of the species that survives, nor the most intelligent that survives. It is the one that is the most adaptable to change.”

The quote serves to highlight the level of change that we are currently seeing in the professional world, said Lerner.

According to the World Economic Forum’s The Future of Jobs Report 2025, over the next five years 22% of the labor market will be affected by structural change – including job creation and destruction – and much of that change will be enabled by new technologies such as AI and robotics. They expect the displacement of 92 million existing jobs and the creation of 170 million new jobs by 2030.

While there will be significant growth in frontline jobs – such as delivery drivers, construction workers, and care workers – the fastest-growing jobs will be tech-related roles, including big data specialists, FinTech engineers, and AI and machine learning specialists, while the greatest decline will be in clerical and secretarial roles. The report also predicts that most workers can anticipate that 39% of their existing skill set will be transformed or outdated in five years.

Lerner also highlighted key findings in the Accenture Life Trends 2025 Report, which explores behaviors and attitudes related to business, technology, and social shifts. The report noted five key trends:

  • Cost of Hesitation – People are becoming more wary of the information they receive online.
  • The Parent Trap – Parents and governments are increasingly concerned with helping the younger generation shape a safe relationship with digital technology.
  • Impatience Economy – People are looking for quick solutions over traditional methods to achieve their health and financial goals.
  • The Dignity of Work – Employees desire to feel inspired, to be entrusted with agency, and to achieve a work-life balance.
  • Social Rewilding – People seek to disconnect and focus on satisfying activities and meaningful interactions.

These are consumer and employee demands representing opportunities for change in the modern business landscape.

Key Capabilities for the AI Era

Businesses are using a variety of strategies to adapt, though not always strategically. According to McLean & Company’s HR Trends Report 2025, 42% of respondents said they are currently implementing AI solutions, but only 7% have a documented AI implementation strategy.

This approach reflects the newness of the technology, with many still unsure of the best way to leverage AI, but also feeling the pressure to adopt and adapt, experiment, and fail forward.

So, what skills do leaders need to lead in an environment with both transformation and uncertainty? Lerner highlighted eight essential capabilities, independent of technology.

Capability 1: Manage Complexity

Leaders need to be able to solve problems and make decisions under fast-changing conditions. This requires:

  • Being able to look at and understand organizations as complex social-technical systems
  • Keeping a continuous eye on change and adopting an “outside-in” vision of their organization
  • Moving fast and fixing things faster
  • Embracing digital literacy and technological capabilities

Capability 2: Leverage Networks

Leaders need to develop networks systematically to achieve organizational goals because it is no longer possible to work within silos. Leaders should:

  • Use networks to gain insights into complex problems
  • Create networks to enhance influence
  • Treat networks as mutually rewarding relationships
  • Develop a robust profile that can be adapted for different networks

Capability 3: Think and Act “Global”

Leaders should benchmark using global best practices but adapt them to local challenges and the needs of their organization. This requires:

  • Identifying what great companies are achieving and seeking data to understand underlying patterns
  • Developing perspectives to craft global strategies that incorporate regional and local tactics
  • Learning how to navigate culturally complex and nuanced business solutions

Capability 4: Inspire Engagement

Leaders must foster a culture that creates meaningful connections between employees and organizational values. This means:

  • Understanding individual values and needs
  • Shaping projects and assignments to meet different values and needs
  • Fostering an inclusive work environment with plenty of psychological safety
  • Developing meaningful conversations and both providing and receiving feedback
  • Sharing advice and asking for help when needed

Capability 5: Communicate Strategically

Leaders should develop crisp, clear messaging adaptable to various audiences and focus on active listening. Achieving this involves:

  • Creating their communication style and finding their unique voice
  • Developing storytelling skills
  • Utilizing a data-centric and fact-based approach to communication
  • Continual practice and asking for feedback

Capability 6: Foster Innovation

Leaders should collaborate with experts to build a reliable innovation process and a creative environment where new ideas thrive. Essential steps include:

  • Developing or enhancing structures that best support innovation
  • Documenting and refreshing innovation systems, processes, and practices
  • Encouraging people to discover new ways of working
  • Aiming to think outside the box and develop a growth mindset
  • Trying to be as “tech-savvy” as possible

Capability 7: Cultivate Learning Agility

Leaders should always seek out and learn new things and not be afraid to ask questions. This involves:

  • Adopting a lifelong learning mindset
  • Seeking opportunities to discover new approaches and skills
  • Enhancing problem-solving skills
  • Reviewing both successful and unsuccessful case studies

Capability 8: Develop Personal Adaptability

Leaders should be focused on being effective when facing uncertainty and adapting to change with vigor. Therefore, leaders should:

  • Be flexible about their approach to facing challenging situations
  • Build resilience by effectively managing stress, time, and energy
  • Recognize when past approaches do not work in current situations
  • Learn from and capitalize on mistakes

Curiosity and Adaptability

With the eight key capabilities in mind, Lerner suggests that curiosity and adaptability are the key skills that everyone needs to thrive in the current environment.

He also advocates for lifelong learning and teaches several key courses at OPIT which can lead to a Bachelor’s Degree in Digital Business.

Read the article
Lessons From History: How Fraud Tactics From the 18th Century Still Impact Us Today
OPIT - Open Institute of Technology
April 17, 2025

Many people treat cyber threats and digital fraud as a new phenomenon that only appeared with the development of the internet. But fraud – intentional deceit to manipulate a victim – has always existed; it is just the tools that have changed.

In a recent online course for the Open Institute of Technology (OPIT), AI & Cybersecurity Strategist Tom Vazdar, chair of OPIT’s Master’s Degree in Enterprise Cybersecurity, demonstrated the striking parallels between some of the famous fraud cases of the 18th century and modern cyber fraud.

Why does the history of fraud matter?

Primarily because the psychology and fraud tactics have remained consistent over the centuries. While cybersecurity is a tool that can combat modern digital fraud threats, no defense strategy will be successful without addressing the underlying psychology and tactics.

The historical fraud cases Vazdar addresses offer valuable lessons for current and future cybersecurity approaches.

The South Sea Bubble (1720)

The South Sea Bubble was one of the first stock market crashes in history. While it may not have had the same far-reaching consequences as the Black Thursday crash of 1929 or the 2008 crash, it shows how fraud can lead to stock market bubbles and advantages for insider traders.

The South Sea Company was a British company that emerged to monopolize trade with the Spanish colonies in South America. The company promised investors significant returns but provided no evidence of its activities. This saw the stock price grow from £100 to £1,000 in a matter of months, then crash when the company’s weakness was revealed.

Many people lost a significant amount of money, including Sir Isaac Newton, prompting the statement, “I can calculate the movement of the stars, but not the madness of men.”

Investors often have no way to verify a company’s claims, making stock markets a fertile ground for manipulation and fraud since their inception. When one party has more information than another, it creates the opportunity for fraud. This can be seen today in Ponzi schemes, tech stock bubbles driven by manipulative media coverage, and initial cryptocurrency offerings.

The Diamond Necklace Affair (1784-1785)

The Diamond Necklace Affair is an infamous incident of fraud linked to the French Revolution. An early example of identity theft, it also demonstrates that the harm caused by such a crime can go far beyond financial.

A French aristocrat named Jeanne de la Motte convinced Cardinal Louis-René-Édouard, Prince de Rohan, that he was buying a valuable diamond necklace on behalf of Queen Marie Antoinette. De la Motte forged letters from the queen and even had someone impersonate her for a meeting, all while convincing the cardinal of the need for secrecy. The cardinal overlooked several questionable issues because he believed he would gain political benefit from the transaction.

When the scheme was finally exposed, it damaged Marie Antoinette’s reputation, despite her lack of involvement in the deception. The story reinforced the public perception of her as a frivolous aristocrat living off the labor of the people. This contributed to the overall resentment of the aristocracy that erupted in the French Revolution and likely played a role in Marie Antoinette’s death. Had she not been seen as frivolous, she might have been allowed to live after her husband’s death.

Today, impersonation scams work in similar ways. For example, a fraudster might forge communication from a CEO to convince employees to release funds or take some other action. The risk of this is only increasing with improved technology such as deepfakes.

Spanish Prisoner Scam (Late 1700s)

The Spanish Prisoner Scam will probably sound very familiar to anyone who received a “Nigerian prince” email in the early 2000s.

Victims received letters from a “wealthy Spanish prisoner” who needed their help to access his fortune. If they sent money to facilitate his escape and travel, he would reward them with greater riches when he regained his fortune. This was only one of many similar scams in the 1700s, often involving follow-up requests for additional payments before the scammer disappeared.

While the “Nigerian prince” scam received enough publicity that it became almost unbelievable that people could fall for it, if done well, these can be psychologically sophisticated scams. The stories play on people’s emotions, get them invested in the person, and enamor them with the idea of being someone helpful and important. A compelling narrative can diminish someone’s critical thinking and cause them to ignore red flags.

Today, these scams are more likely to take the form of inheritance fraud or a lottery scam, where, again, a person has to pay an advance fee to unlock a much bigger reward, playing on the common desire for easy money.

Evolution of Fraud

These examples make it clear that fraud is nothing new and that effective tactics have thrived over the centuries. Technology simply opens up new opportunities for fraud.

While 18th-century scammers had to rely on face-to-face contact and fraudulent letters, in the 19th century they could leverage the telegraph for “urgent” communication and newspaper ads to reach broader audiences. In the 20th century, there were telephones and television ads. Today, there are email, social media, and deepfakes, with new technologies emerging daily.

Rather than quack doctors offering miracle cures, we see online health scams selling diet pills and antiaging products. Rather than impersonating real people, we see fake social media accounts and catfishing. Fraudulent sites convince people to enter their bank details rather than asking them to send money. The anonymity of the digital world protects perpetrators.

But despite the technology changing, the underlying psychology that makes scams successful remains the same:

  • Greed and the desire for easy money
  • Fear of missing out and the belief that a response is urgent
  • Social pressure to “keep up with the Joneses” and the “Bandwagon Effect”
  • Trust in authority without verification

Therefore, the best protection against scams remains the same: critical thinking and skepticism, not technology.

Responding to Fraud

In conclusion, Vazdar shared a series of steps that people should take to protect themselves against fraud:

  • Think before you click.
  • Beware of secrecy and urgency.
  • Verify identities.
  • If it seems too good to be true, be skeptical.
  • Use available security tools.

Those security tools have changed over time and will continue to change, but the underlying steps for identifying and preventing fraud remain the same.

For more insights from Vazdar and other experts in the field, consider enrolling in highly specialized and comprehensive programs like OPIT’s Enterprise Cybersecurity Master’s program.

Read the article
Il Sole 24 Ore: Integrating Artificial Intelligence into the Enterprise – Challenges and Opportunities for CEOs and Management
OPIT - Open Institute of Technology
April 14, 2025

Expert Pierluigi Casale analyzes the adoption of AI by companies, the ethical and regulatory challenges, and the differing approaches of large companies and SMEs

By Gianni Rusconi

Easier said than done: to paraphrase the well-known proverb, and to place it within the ever-growing collection of critical issues and opportunities related to artificial intelligence, the task facing CEOs and management of adequately integrating this technology into the company is indeed difficult. Pierluigi Casale, professor at OPIT (Open Institute of Technology, an academic institution founded two years ago and specialized in the field of Computer Science) and technical consultant to the European Parliament for the implementation and regulation of AI, is among those who contributed to the definition of the AI Act, providing advice on aspects of safety and civil liability. His task, in short, is to ensure that the adoption of artificial intelligence (primarily within the parliamentary committees operating in Brussels) is not only efficient, but also ethical and compliant with regulations. And, obviously, his is not an easy task.

The experience gained over the last 15 years in the field of machine learning, and the role played in organizations such as Europol and in leading technology companies, are the credentials that Casale brings to the table to balance the needs of EU bodies with the pressure exerted by American Big Tech and to preserve an independent approach to the regulation of artificial intelligence. A technology, it is worth remembering, that implies broad and diversified knowledge, ranging from the regulatory and application spectrum to geopolitical issues, from computational limitations (common to European companies and public institutions) to the challenges related to training large language models.

CEOs and AI

When we specifically asked how CEOs and C-suites are “digesting” AI in terms of ethics, safety and responsibility, Casale did not shy away, framing the topic based on his own professional career. “I have noticed two trends in particular: the first concerns companies that started using artificial intelligence before the AI Act and that today have the need, as well as the obligation, to adapt to the new ethical framework to be compliant and avoid sanctions; the second concerns companies, like the Italian ones, that are only now approaching this topic, often with experimental and incomplete projects (the expression used literally is ‘proof of concept’, ed.) that have not yet produced value. In this case, the ethical and regulatory component is integrated into the adoption process.”

In general, according to Casale, there is still a lot to do even from a purely regulatory perspective, because there is neither full coherence of vision among the different countries nor the same speed in implementing the guidelines. Spain, in this regard, is setting an example, having established (with a royal decree of 8 November 2023) a dedicated “sandbox”, i.e. a regulatory experimentation space for artificial intelligence: a controlled test environment for the development and pre-marketing phase of certain AI systems, designed to verify compliance with the requirements and obligations set out in the AI Act and to guide companies towards a path of regulated adoption of the technology.

Read the full article below (in Italian):

Read the article