Data Science & AI
Dive deep into data-driven technologies: Machine Learning, Reinforcement Learning, Data Mining, Big Data, NLP & more. Stay updated.
Source:
- Agenda Digitale, published on May 16th, 2025
By Riccardo Ocleppo, Founder and Director of OPIT – Open Institute of Technology
AI ethics requires ongoing commitment. Organizations must integrate guidelines and a corporate culture geared towards responsibility and inclusiveness, preventing negative consequences for individuals and society.
In the world of artificial intelligence, concerns about algorithmic bias are coming to the forefront, calling for a collective effort to promote ethical practices in the development and use of AI.
This means understanding the many causes and potential consequences of bias, identifying concrete solutions, and recognizing the key role academic institutions play in this process.
Bias in AI is a form of injustice, often systemic, that can be embedded in algorithms. Its origins are many, but the main culprit is almost always the data set used to train the models. If this data reflects inequalities or prejudices present in society, the risk is that AI will absorb and reproduce them, consolidating these distortions.
But bias can also cut in the opposite direction. This is what happened some time ago with Google Gemini: in an attempt to ensure greater inclusivity, Google's generative AI system ended up generating content and images completely disconnected from the reality it was supposed to represent.
Further complicating the picture is the very nature of AI models, which are often characterized by complex algorithms and opaque decision-making processes. This complexity makes it difficult to identify, and therefore correct, biases inherent in the systems.
Ethical Data Management to Reduce Bias in AI
Adopting good data management practices is essential to address these issues. The first step is to ensure that the datasets used for training are diverse and representative. This means actively seeking data that includes a wide variety of demographic, cultural, and social contexts, so that AI does not simply reproduce existing, and potentially biased, patterns.
Alongside data diversification, it is equally important to test models on different demographic groups. Only in this way can latent biases that would otherwise remain invisible be highlighted. Furthermore, promoting transparency in algorithms and decision-making processes is crucial. Transparency allows for critical control and makes all actors involved in the design and use of AI accountable.
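To make the idea concrete, here is a minimal sketch of what per-group testing might look like in practice, assuming a fitted scikit-learn classifier and a pandas DataFrame with a demographic group column (the function and column names are illustrative, not from the article):

```python
# A minimal sketch of per-group evaluation for a binary classifier.
# Assumes a fitted scikit-learn model and a DataFrame with a "group" column;
# all names here are illustrative placeholders.
import pandas as pd
from sklearn.metrics import accuracy_score

def evaluate_by_group(model, df: pd.DataFrame, features: list,
                      label: str, group: str) -> pd.DataFrame:
    """Compare accuracy and positive-prediction rate across demographic groups."""
    rows = []
    for name, subset in df.groupby(group):
        preds = model.predict(subset[features])
        rows.append({
            group: name,
            "n": len(subset),
            "accuracy": accuracy_score(subset[label], preds),
            # Large gaps in this rate across groups hint at disparate impact.
            "positive_rate": preds.mean(),
        })
    return pd.DataFrame(rows)
```

Gaps that only show up in a table like this are exactly the latent biases described above: aggregate accuracy can look excellent while one group is served far worse.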
Strategies for Ethical and Responsible Artificial Intelligence
Building ethical AI is not an isolated action, but an ongoing journey that requires constant attention and updating. This commitment is divided into several fundamental steps. First, ethical guidelines must be defined. Organizations must clearly establish the ethical standards to follow in the development and use of AI, inspired by fundamental values such as fairness, responsibility and transparency. These principles serve as a compass to guide all projects.
It is also essential to include a plurality of perspectives in the development of AI. Multidisciplinary teams, composed of technologists, ethicists, sociologists, and representatives of the communities potentially involved, can help prevent and correct biases thanks to the variety of approaches. Last but not least, organizations must promote an ethical culture: in addition to establishing rules and assembling diverse teams, it is essential to cultivate a corporate culture that places ethics at the center of every project. Only by integrating these values into the DNA of the organization can we ensure that ethics is a founding element of AI development.
The Consequences of Biased Artificial Intelligence
Ignoring the problem of bias can have serious and unpredictable consequences, with profound impacts on different areas of our lives. From the reinforcement of social inequalities to the loss of trust in AI-based systems, the risk is fueling skepticism and resistance towards technological innovation. AI, if distorted, can negatively influence crucial decisions in sectors such as healthcare, employment, and justice. Think, for example, of loan-approval algorithms that unfairly penalize certain groups, or facial recognition software that misidentifies people, with possible legal consequences. These are just some of the situations in which unethical use of AI can worsen existing inequalities.
University Training and Research to Counter Bias in AI
Universities and higher education institutions have a crucial responsibility to address bias and promote ethical practices in AI development. Ethics must certainly be integrated into educational curricula. By including ethics modules in AI and computer science courses, universities can give new generations of developers the tools to recognize and address bias, contributing to more equitable and inclusive design. Universities can also lead the way through research.
Academic institutions, with their autonomy and expertise, can explore the complexities of bias in depth, developing innovative solutions for detecting and mitigating it. And because bias is multidimensional in nature, it calls for interdisciplinary collaboration: universities can create spaces where computer scientists, ethicists, lawyers, and social scientists work together, offering more comprehensive and innovative solutions.
But that’s not all. As places of critical thinking and debate, universities can foster dialogue between developers, policy makers, and citizens through events, workshops, and conferences. This engagement is essential to raise awareness and promote responsible use of AI.
In this direction, several universities have already activated degree courses in artificial intelligence that combine advanced technical skills (in areas such as machine learning, computer vision and natural language processing) with training that is attentive to ethical and human implications.
Academic Opportunities for an Equitable AI Future
More and more universities around the world – including Yale and Oxford – are also creating research departments dedicated to AI and ethics.
The path to ethical AI is complex, but it also represents an opportunity to build a future where technology truly serves the common good.
By recognizing the root causes of bias, adopting responsible data practices, and engaging in ongoing and vigilant development, we can reduce the unintended effects of biased algorithms. In this process, academic institutions – thanks to their expertise and authority – are at the forefront, helping to shape a more equitable and inclusive digital age.
Source:
- TechFinancials, published on May 16th, 2025
By Zorina Alliata
Artificial intelligence (AI) used to be the stuff of science fiction. Stories about rogue machines and robot uprisings were once a source of amusement, not anxiety. But over recent years, AI has quietly embedded itself in our daily lives, from the algorithms behind social media feeds to the voice assistants managing our calendars. This quiet takeover has become something far louder: fear.
Headlines around AI are often alarmist. Statements such as “AI will take your job”, “AI will end education”, or “AI is dangerous and unregulated” are thrown around regularly. These narratives feed on uncertainty and fuel distrust.
But it doesn’t have to be this way. The hyper-fixation on the never-ending negative aspects of AI is the wrong approach to take. What if AI isn’t the villain? What if, at this stage, it’s simply a child?
AI, in many ways, is still learning. It mimics human behaviour, absorbs language, and forms patterns based on what it sees. Its current capabilities, however powerful they may seem, are not equivalent to human intelligence. It has limitations. It makes mistakes. It can even be manipulated and misled. It reflects our world, flaws and all. In that sense, AI is less an omnipotent force and more like a toddler trying to find its way.
And, like any child, it needs guidance.
This is especially evident in education. The emergence of AI tools such as ChatGPT has caused a stir in higher education institutions and universities, sparking fears about plagiarism and the erosion of critical thinking. Some institutions have responded with strict bans, while others have embraced cautious integration. The panic is understandable, but is it misplaced?
Rather than jumping to conclusions, educators should consider shifting the conversation. AI can, in fact, become an ally in learning. Instead of assuming students will cheat, we can teach them to use AI responsibly. Most of us educators can already recognise the signs of AI-generated work: excessive use of numbered lists, repetitive language and poor comparison skills. So why not use this as a teaching opportunity?
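As a playful illustration of those telltale signs, here is a toy heuristic of our own that counts numbered-list lines and repeated word pairs. It is a classroom prop built on our own assumptions, not a reliable detector:

```python
# Toy heuristic in the spirit of the signs above: count numbered-list lines
# and repeated word pairs. A teaching illustration only, not a real detector.
import re
from collections import Counter

def ai_style_signals(text: str) -> dict:
    lines = text.splitlines()
    numbered = sum(1 for line in lines if re.match(r"\s*\d+[.)]\s", line))
    words = re.findall(r"[a-z']+", text.lower())
    bigrams = Counter(zip(words, words[1:]))
    repeated = sum(1 for count in bigrams.values() if count >= 3)
    return {"numbered_list_lines": numbered, "repeated_word_pairs": repeated}

print(ai_style_signals("1. First point\n2. Second point\nIn summary, in summary, in summary"))
```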
Encouraging students to engage with AI critically – understanding what it's good at, where it falls short, and how to improve its output – can strengthen their own analytical skills. It invites them to become more active participants in their learning, not passive consumers of machine-generated answers. Teaching young people how to work with AI is arguably more important than shielding them from it.
Outside the classroom, AI’s impact on the workforce is another growing concern. Stories about AI replacing jobs often dominate the news cycle. But these conversations often ignore a key point: AI is not autonomous. AI needs human designers, engineers, analysts, and ethicists to guide it. For every job that AI may eliminate, others will emerge to support and direct it.
More importantly, there are many things AI simply cannot do. It doesn't understand nuance, morality or emotion. It can't make ethical decisions without human input. These aren't minor gaps; they're fundamental. That's why we must stop imagining AI as an unstoppable force and start thinking about how to raise it responsibly.
When considering how to raise our AI child responsibly, we need to acknowledge the issue of the algorithm being biased. Critics often point out that AI reproduces prejudices and errors, and whilst this is true, the source of that bias is us. It is important to remember that AI learns from historical data created by us, much of which reflects deeply ingrained societal inequalities.
Take, for example, mortgage lending in the US, where decades of discriminatory practices have skewed the data. Unless we intervene, AI trained on this information will inevitably reflect those same biases.
That’s not a reason to reject AI. It’s a reason to be more involved in its development, like any good parent. The responsibility lies with us.
Parenting is not about control for control’s sake; it’s about nurturing growth while setting boundaries. AI, like a child, needs feedback, accountability, and care. It will grow, but how it grows is up to us.
It’s tempting to view technology as something that happens to us, rather than something we can shape. But AI doesn’t exist outside of society, it’s a product of our values, decisions, and input. If we treat it as a monster, it may become one. If we treat it as a mirror, it will reflect what we show it. And if we treat it as a child, we may be able to raise it into something better.
So instead of fearmongering, let’s ask ourselves a better question: Are we raising AI correctly?
- Zorina Alliata, Professor of Responsible AI at OPIT – Open Institute of Technology
Source:
- Wired, published on May 01st, 2025
People are using ChatGPT’s new image generator to take part in viral social media trends. But using it also puts your privacy at risk—unless you take a few simple steps to protect yourself.
By Kate O’Flaherty
At the start of April, an influx of action figures started appearing on social media sites including LinkedIn and X. Each figure depicted the person who had created it with uncanny accuracy, complete with personalized accessories such as reusable coffee cups, yoga mats, and headphones.
All this is possible because of OpenAI’s new GPT-4o-powered image generator, which supercharges ChatGPT’s ability to edit pictures, render text, and more. OpenAI’s ChatGPT image generator can also create pictures in the style of Japanese animated film company Studio Ghibli—a trend that quickly went viral, too.
The images are fun and easy to make—all you need is a free ChatGPT account and a photo. Yet to create an action figure or Studio Ghibli-style image, you also need to hand over a lot of data to OpenAI, which could be used to train its models.
Hidden Data
The data you are giving away when you use an AI image editor is often hidden. Every time you upload an image to ChatGPT, you’re potentially handing over “an entire bundle of metadata,” says Tom Vazdar, area chair for cybersecurity at Open Institute of Technology. “That includes the EXIF data attached to the image file, such as the time the photo was taken and the GPS coordinates of where it was shot.”
OpenAI also collects data about the device you’re using to access the platform. That means your device type, operating system, browser version, and unique identifiers, says Vazdar. “And because platforms like ChatGPT operate conversationally, there’s also behavioral data, such as what you typed, what kind of images you asked for, how you interacted with the interface and the frequency of those actions.”
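To see what that bundle looks like, here is a small sketch using the Pillow imaging library (our illustrative choice; neither expert names a tool) to print a photo's EXIF fields before you upload it:

```python
# Inspect the EXIF metadata a photo would carry with it when uploaded.
# Pillow is an illustrative choice; the file name is a placeholder.
from PIL import Image
from PIL.ExifTags import TAGS

def show_exif(path: str) -> None:
    exif = Image.open(path).getexif()
    if not exif:
        print("No EXIF metadata found.")
        return
    for tag_id, value in exif.items():
        name = TAGS.get(tag_id, tag_id)  # e.g. DateTime, Model, GPSInfo
        print(f"{name}: {value}")

show_exif("photo.jpg")
# Re-saving through Pillow without an exif argument writes a copy with the
# metadata dropped: Image.open("photo.jpg").save("photo_clean.jpg")
```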
It’s not just your face. If you upload a high-resolution photo, you’re giving OpenAI whatever else is in the image, too—the background, other people, things in your room and anything readable such as documents or badges, says Camden Woollven, group head of AI product marketing at risk management firm GRC International Group.
This type of voluntarily provided, consent-backed data is “a gold mine for training generative models,” especially multimodal ones that rely on visual inputs, says Vazdar.
OpenAI denies it is orchestrating viral photo trends as a ploy to collect user data, yet the firm certainly gains an advantage from it. OpenAI doesn’t need to scrape the web for your face if you’re happily uploading it yourself, Vazdar points out. “This trend, whether by design or a convenient opportunity, is providing the company with massive volumes of fresh, high-quality facial data from diverse age groups, ethnicities, and geographies.”
OpenAI says it does not actively seek out personal information to train models—and it doesn’t use public data on the internet to build profiles about people to advertise to them or sell their data, an OpenAI spokesperson tells WIRED. However, under OpenAI’s current privacy policy, images submitted through ChatGPT can be retained and used to improve its models.
Any data, prompts, or requests you share helps teach the algorithm – and personalized information helps fine-tune it further, says Jake Moore, global cybersecurity adviser at security outfit ESET, who created his own action figure to demonstrate the privacy risks of the trend on LinkedIn.
Uncanny Likeness
In some markets, your photos are protected by regulation. In the UK and EU, data-protection regulations, including the GDPR, offer strong protections, such as the right to access or delete your data. At the same time, use of biometric data requires explicit consent.
However, photographs become biometric data only when processed through a specific technical means allowing the unique identification of a specific individual, says Melissa Hall, senior associate at law firm MFMac. Processing an image to create a cartoon version of the subject in the original photograph is “unlikely to meet this definition,” she says.
Meanwhile, in the US, privacy protections vary. “California and Illinois are leading with stronger data protection laws, but there is no standard position across all US states,” says Annalisa Checchi, a partner at IP law firm Ionic Legal. And OpenAI’s privacy policy doesn’t contain an explicit carve-out for likeness or biometric data, which “creates a grey area for stylized facial uploads,” Checchi says.
The risks include your image or likeness being retained, potentially used to train future models, or combined with other data for profiling, says Checchi. “While these platforms often prioritize safety, the long-term use of your likeness is still poorly understood—and hard to retract once uploaded.”
OpenAI says its users’ privacy and security is a top priority. The firm wants its AI models to learn about the world, not private individuals, and it actively minimizes the collection of personal information, an OpenAI spokesperson tells WIRED.
Meanwhile, users have control over how their data is used, with self-service tools to access, export, or delete personal information. You can also opt out of having content used to improve models, according to OpenAI.
ChatGPT Free, Plus, and Pro users can control whether they contribute to future model improvements in their data controls settings. OpenAI does not train on ChatGPT Team, Enterprise, and Edu customer data by default, according to the company.
Source:
- LADBible and Yahoo News, published on May 01st, 2025
You’ve probably seen them all over Instagram
By James Moorhouse
Experts have warned against participating in a viral social media trend which sees people use ChatGPT to create an action figure version of themselves.
If you've spent any time whatsoever doomscrolling on Instagram, TikTok or, dare I say it, LinkedIn recently, you'll be all too aware of the viral trend.
Obviously, there's nothing more entertaining and frivolous than seeing AI-generated versions of your co-workers and their cute little laptops and piña coladas, but it turns out that it might not be the best idea to take part.
There may well be some benefits to artificial intelligence but often it can produce some pretty disturbing results. Earlier this year, a lad from Norway sued ChatGPT after it falsely claimed he had been convicted of killing two of his kids.
Unfortunately, if you don't like AI, then you're going to have to accept that it's going to become a regular part of our lives. You only need to look at WhatsApp or Facebook Messenger to realise that. But it's always worth saying please and thank you to ChatGPT just in case society does collapse and the AI robots take over, in the hope that they treat you mercifully. Although it might cost them a little more electricity.
Anyway, in case you’re thinking of getting involved in this latest AI trend and sharing your face and your favourite hobbies with a high tech robot, maybe don’t. You don’t want to end up starring in your own Netflix series, à la Black Mirror.
Tom Vazdar, area chair for cybersecurity at Open Institute of Technology, spoke with Wired about some of the dangers of sharing personal details about yourself with AI.
Every time you upload an image to ChatGPT, you're potentially handing over "an entire bundle of metadata", he revealed.
Vazdar added: “That includes the EXIF data attached to the image file, such as the time the photo was taken and the GPS coordinates of where it was shot.
“Because platforms like ChatGPT operate conversationally, there’s also behavioural data, such as what you typed, what kind of images you asked for, how you interacted with the interface and the frequency of those actions.”
Essentially, if you upload a photo of your face, you're not just giving AI access to your face, but also to whatever is in the background, such as your location or other people who might feature.
Vazdar concluded: “This trend, whether by design or a convenient opportunity, is providing the company with massive volumes of fresh, high-quality facial data from diverse age groups, ethnicities, and geographies.”
While we're at it, maybe stop using ChatGPT for your university essays and for general basic questions you can find the answer to on Google as well. The last thing you need is AI knowing you don't know how to do something basic if it does take over the world.
The world is rapidly changing. New technologies such as artificial intelligence (AI) are transforming our lives and work, redefining what counts as "essential office skills."
So what essential skills do today’s workers need to thrive in a business world undergoing a major digital transformation? It’s a question that Alan Lerner, director at Toptal and lecturer at the Open Institute of Technology (OPIT), addressed in his recent online masterclass.
In a broad overview of the new office landscape, Lerner shared the essential skills leaders need – including the ability to manage artificial intelligence – to keep abreast of trends.
Here are eight essential capabilities business leaders in the AI era need, according to Lerner, which he also detailed in OPIT’s recent Master’s in Digital Business and Innovation webinar.
An Adapting Professional Environment
Lerner started his discussion by quoting naturalist Charles Darwin.
“It is not the strongest of the species that survives, nor the most intelligent that survives. It is the one that is the most adaptable to change.”
The quote serves to highlight the level of change that we are currently seeing in the professional world, said Lerner.
According to the World Economic Forum's The Future of Jobs Report 2025, over the next five years 22% of the labor market will be affected by structural change – including job creation and destruction – and much of that change will be enabled by new technologies such as AI and robotics. The report anticipates the displacement of 92 million existing jobs and the creation of 170 million new ones by 2030, a net gain of 78 million.
While there will be significant growth in frontline jobs – such as delivery drivers, construction workers, and care workers – the fastest-growing roles will be tech-related, including big data specialists, FinTech engineers, and AI and machine learning specialists, while the greatest decline will be in clerical and secretarial roles. The report also predicts that 39% of workers' existing skill sets will be transformed or become outdated within five years.
Lerner also highlighted key findings in the Accenture Life Trends 2025 Report, which explores behaviors and attitudes related to business, technology, and social shifts. The report noted five key trends:
- Cost of Hesitation – People are becoming more wary of the information they receive online.
- The Parent Trap – Parents and governments are increasingly concerned with helping the younger generation shape a safe relationship with digital technology.
- Impatience Economy – People are looking for quick solutions over traditional methods to achieve their health and financial goals.
- The Dignity of Work – Employees desire to feel inspired, to be entrusted with agency, and to achieve a work-life balance.
- Social Rewilding – People seek to disconnect and focus on satisfying activities and meaningful interactions.
These consumer and employee demands represent opportunities for change in the modern business landscape.
Key Capabilities for the AI Era
Businesses are using a variety of strategies to adapt, though not always strategically. According to McLean & Company's HR Trends Report 2025, 42% of respondents said they are currently implementing AI solutions, but only 7% have a documented AI implementation strategy.
This approach reflects the newness of the technology, with many still unsure of the best way to leverage AI, but also feeling the pressure to adopt and adapt, experiment, and fail forward.
So, what skills do leaders need to lead in an environment with both transformation and uncertainty? Lerner highlighted eight essential capabilities, independent of technology.
Capability 1: Manage Complexity
Leaders need to be able to solve problems and make decisions under fast-changing conditions. This requires:
- Being able to look at and understand organizations as complex socio-technical systems
- Keeping a continuous eye on change and adopting an “outside-in” vision of their organization
- Moving fast and fixing things faster
- Embracing digital literacy and technological capabilities
Capability 2: Leverage Networks
Leaders need to develop networks systematically to achieve organizational goals because it is no longer possible to work within silos. Leaders should:
- Use networks to gain insights into complex problems
- Create networks to enhance influence
- Treat networks as mutually rewarding relationships
- Develop a robust profile that can be adapted for different networks
Capability 3: Think and Act “Global”
Leaders should benchmark using global best practices but adapt them to local challenges and the needs of their organization. This requires:
- Identifying what great companies are achieving and seeking data to understand underlying patterns
- Developing perspectives to craft global strategies that incorporate regional and local tactics
- Learning how to navigate culturally complex and nuanced business solutions
Capability 4: Inspire Engagement
Leaders must foster a culture that creates meaningful connections between employees and organizational values. This means:
- Understanding individual values and needs
- Shaping projects and assignments to meet different values and needs
- Fostering an inclusive work environment with plenty of psychological safety
- Developing meaningful conversations and both providing and receiving feedback
- Sharing advice and asking for help when needed
Capability 5: Communicate Strategically
Leaders should develop crisp, clear messaging adaptable to various audiences and focus on active listening. Achieving this involves:
- Creating their communication style and finding their unique voice
- Developing storytelling skills
- Utilizing a data-centric and fact-based approach to communication
- Continual practice and asking for feedback
Capability 6: Foster Innovation
Leaders should collaborate with experts to build a reliable innovation process and a creative environment where new ideas thrive. Essential steps include:
- Developing or enhancing structures that best support innovation
- Documenting and refreshing innovation systems, processes, and practices
- Encouraging people to discover new ways of working
- Aiming to think outside the box and develop a growth mindset
- Trying to be as “tech-savvy” as possible
Capability 7: Cultivate Learning Agility
Leaders should always seek out and learn new things and not be afraid to ask questions. This involves:
- Adopting a lifelong learning mindset
- Seeking opportunities to discover new approaches and skills
- Enhancing problem-solving skills
- Reviewing both successful and unsuccessful case studies
Capability 8: Develop Personal Adaptability
Leaders should be focused on being effective when facing uncertainty and adapting to change with vigor. Therefore, leaders should:
- Be flexible about their approach to facing challenging situations
- Build resilience by effectively managing stress, time, and energy
- Recognize when past approaches do not work in current situations
- Learn from and capitalize on mistakes
Curiosity and Adaptability
With the eight key capabilities in mind, Lerner suggests that curiosity and adaptability are the key skills that everyone needs to thrive in the current environment.
He also advocates for lifelong learning and teaches several key courses at OPIT, which can lead to a Bachelor's Degree in Digital Business.
Source:
- Il Sole 24 Ore, published on April 14th, 2025
Expert Pierluigi Casale analyzes the adoption of AI by companies, the ethical and regulatory challenges, and the differing approaches of large companies and SMEs
By Gianni Rusconi
Easier said than done: to paraphrase the well-known proverb, and to place it in the ever-growing collection of critical issues and opportunities related to artificial intelligence, the task facing CEOs and management in adequately integrating this technology into the company is indeed difficult. Pierluigi Casale, professor at OPIT (Open Institute of Technology, an academic institution founded two years ago and specialized in Computer Science) and technical consultant to the European Parliament on the implementation and regulation of AI, is among those who contributed to the definition of the AI Act, providing advice on safety and civil liability. His task, in short, is to ensure that the adoption of artificial intelligence (primarily within the parliamentary committees operating in Brussels) is not only efficient, but also ethical and compliant with regulations. And, obviously, it is not an easy one.
Casale brings to the table 15 years of experience in machine learning and roles in organizations such as Europol and leading technology companies – credentials he draws on to balance the needs of EU bodies against the pressure exerted by American Big Tech and to preserve an independent approach to the regulation of artificial intelligence. It is worth remembering that this technology demands broad and diversified knowledge, ranging from regulatory and application questions to geopolitical issues, and from computational limitations (common to European companies and public institutions) to the challenges of training large language models.
CEOs and AI
When we specifically asked how CEOs and C-suites are "digesting" AI in terms of ethics, safety, and responsibility, Casale did not shy away, framing the topic through his own professional experience. "I have noticed two trends in particular: the first concerns companies that started using artificial intelligence before the AI Act and that today have the need, as well as the obligation, to adapt to the new ethical framework to be compliant and avoid sanctions; the second concerns companies, like the Italian ones, that are only now approaching this topic, often with experimental and incomplete projects (the expression used literally is "proof of concept", ed.) that have not yet produced value. In this case, the ethical and regulatory component is integrated into the adoption process."
In general, according to Casale, there is still a lot to do even from a purely regulatory perspective, because the various countries neither share a fully coherent vision nor implement the guidelines at the same speed. Spain is setting an example in this regard: a royal decree of 8 November 2023 established a dedicated "sandbox", i.e. a regulatory experimentation space for artificial intelligence. It creates a controlled test environment for certain AI systems in their development and pre-marketing phases, in order to verify compliance with the requirements and obligations set out in the AI Act and to guide companies towards regulated adoption of the technology.
There is no question that the spread of artificial intelligence (AI) is having a profound impact on nearly every aspect of our lives.
But is an AI-powered future one to be feared, or does AI offer the promise of a "lucky future"?
That “lucky future” prediction comes from Zorina Alliata, principal AI Strategist at Amazon and AI faculty member at Georgetown University and the Open Institute of Technology (OPIT), in her recent webinar “The Lucky Future: How AI Aims to Change Everything” (February 18, 2025).
However, according to Alliata, such a future depends on how the technology develops and whether strategies can be implemented to mitigate the risks.
How AI Aims to Change Everything
For many people, AI is already changing the way they work. However, more broadly, AI has profoundly impacted how we consume information.
From the curation of a social media feed and the summary answer to a search query from Gemini at the top of your Google results page to the AI-powered chatbot that resolves your customer service issues, AI has quickly and quietly infiltrated nearly every aspect of our lives in the past few years.
While there have been significant concerns recently about the possibly negative impact of AI, Alliata’s “lucky future” prediction takes these fears into account. As she detailed in her webinar, a future with AI will have to take into consideration:
- Where we are currently with AI and future trajectories
- The impact AI is having on the job landscape
- Sustainability concerns and ethical dilemmas
- The fundamental risks associated with current AI technology
According to Alliata, by addressing these risks, we can craft a future in which AI helps individuals better align their needs with potential opportunities and limitations of the new technology.
Industry Applications of AI
While AI has been in development for decades, Alliata describes a period known as the "AI winter," during which educators like herself studied AI technology but had not yet arrived at practical applications. Concerns over how to make AI profitable also contributed to this period of uncertainty.
That all changed about 10-15 years ago when machine learning (ML) improved significantly. This development led to a surge in the creation of business applications for AI. Beginning with automation and robotics for repetitive tasks, the technology progressed to data analysis – taking a deep dive into data and finding not only new information but new opportunities as well.
This further developed into generative AI capable of completing creative tasks. Generative AI now produces around one billion words per day, compared to the one trillion produced by humans.
We are now at the stage where AI can complete complex tasks involving multiple steps. In her webinar, Alliata gave the example of a team creating storyboards and user pathways for a new app they wanted to develop. Using photos and rough images, they were able to use AI to generate the code for the app, saving hundreds of hours of manpower.
The next step in AI evolution is Artificial General Intelligence (AGI), an extremely autonomous level of AI that can replicate or in some cases exceed human intelligence. While the benefits of such technology may readily be obvious to some, the industry itself is divided as to not only whether this form of AI is close at hand or simply unachievable with current tools and technology, but also whether it should be developed at all.
This unpredictability, according to Alliata, represents both the excitement and the concerns about AI.
The AI Revolution and the Job Market
According to Alliata, the job market is the next area where the AI revolution can profoundly impact our lives.
To date, the AI revolution has not resulted in the widespread layoffs initially feared. Instead of making employees redundant, AI has reshaped many jobs so that people now work alongside it. In fact, AI has also created new jobs, such as AI prompt writer.
However, the prediction is that as AI becomes more sophisticated, it will need less human support, resulting in greater job churn. Alliata shared statistics from various studies predicting that as many as 27% of all jobs are at high risk of becoming redundant due to AI, and that 40% of working hours will be impacted by large language models (LLMs) like ChatGPT.
Furthermore, AI may impact some roles and industries more than others. For example, one study suggests that in high-income countries, 8.5% of jobs held by women were likely to be impacted by potential automation, compared to just 3.9% of jobs held by men.
Is AI Sustainable?
While Alliata shared the many ways in which AI can potentially save businesses time and money, she also highlighted that it is an expensive technology in terms of sustainability.
Conducting AI training and processing puts a heavy strain on processing hardware, requiring a great deal of energy. According to estimates, GPT-3 alone uses as much electricity per day as 121 U.S. households in an entire year. Gartner predicts that by 2030, AI could consume 3.5% of the world's electricity.
To reduce the energy requirements, Alliata highlighted potential paths forward in terms of hardware optimization, such as more energy-efficient chips, greater use of renewable energy sources, and algorithm optimization. For example, models that can be applied to a variety of uses based on prompt engineering and parameter-efficient tuning are more energy-efficient than training models from scratch.
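As one hedged illustration of that last point, the sketch below attaches LoRA adapters to a small pretrained model using Hugging Face's peft library; the webinar names the technique, not this library or model, so treat the specifics as our assumptions:

```python
# Parameter-efficient tuning sketch: LoRA adapters on a small model.
# Library (peft) and model (gpt2) are illustrative assumptions.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")  # small stand-in model
config = LoraConfig(
    task_type="CAUSAL_LM",
    r=8,                        # low-rank adapter dimension
    lora_alpha=16,
    target_modules=["c_attn"],  # GPT-2's attention projection layer
    lora_dropout=0.05,
)
model = get_peft_model(base, config)
model.print_trainable_parameters()
# Typically well under 1% of weights end up trainable, so fine-tuning touches
# a tiny fraction of the parameters a from-scratch training run would update.
```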
Risks of Using Generative AI
While Alliata is clearly an advocate for the benefits of AI, she also highlighted the risks associated with using generative AI, particularly LLMs.
- Uncertainty – While we rely on AI for answers, we aren’t always sure that the answers provided are accurate.
- Hallucinations – Technology designed to answer questions can make up facts when it does not know the answer.
- Copyright – LLMs are often trained on copyrighted data without permission from the creators.
- Bias – LLMs are often trained on biased data, and that bias becomes part of the model's behavior and output.
- Vulnerability – Users can bypass the original functionality of an LLM and use it for a different purpose.
- Ethical Risks – AI applications pose significant ethical risks, including the creation of deepfakes, the erosion of human creativity, and the aforementioned risks of unemployment.
Mitigating these risks relies on pillars of responsibility for using AI, including value alignment of the application, accountability, transparency, and explainability.
The last one, according to Alliata, is vital on a human level. Imagine you work for a bank using AI to assess loan applications. If a loan is denied, the explanation you give to the customer can’t simply be “Because the AI said so.” There needs to be firm and explainable data behind the reasoning.
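One way such an explanation could be produced – a sketch on synthetic stand-in data, not a prescribed method – is with SHAP values over a trained credit model:

```python
# Explain a single (synthetic) loan decision with SHAP feature attributions.
# Data, features, and the toy rule are all hypothetical placeholders.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

feature_names = ["income", "debt_to_income", "credit_history_years"]
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = (X[:, 1] > 0.3).astype(int)  # toy rule: high debt-to-income drives denial

model = GradientBoostingClassifier().fit(X, y)
explainer = shap.TreeExplainer(model)
contribs = explainer.shap_values(X[:1])[0]  # attribution for one applicant

# Most influential features first, so the customer hears "your debt-to-income
# ratio raised the risk score" rather than "because the AI said so".
for name, impact in sorted(zip(feature_names, contribs), key=lambda p: -abs(p[1])):
    print(f"{name}: {impact:+.3f}")
```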
OPIT's Master's in Responsible Artificial Intelligence explores these and other risks and responsibilities inherent in AI.
A Lucky Future
Despite the potential risks, Alliata concludes that AI presents even more opportunities and solutions in the future.
Information overload and decision fatigue are major challenges today. Imagine you want to buy a new car. You have a dozen features you desire, alongside hundreds of options, as well as thousands of websites containing the relevant information. AI can help you cut through the noise and narrow the information down to what you need based on your specific requirements.
Alliata also shared how AI is changing healthcare, allowing patients to understand their health data, make informed choices, and find healthcare professionals who meet their needs.
It is this functionality that can lead to the “lucky future.” Personalized guidance based on an analysis of vast amounts of data means that each person is more likely to make the right decision with the right information at the right time.
Source:
- Agenda Digitale, published on March 28th, 2025
By Zorina Alliata, Professor of Responsible Artificial Intelligence and Digital Business & Innovation at OPIT – Open Institute of Technology
Integrating generative AI into your business means innovating, but also managing risks. Here's how to choose the right approach to extract value from it.
The adoption of generative AI in the enterprise is growing rapidly, bringing innovation to decision-making, creativity and operations. However, to fully exploit its potential, it is essential to define clear objectives and adopt strategies that balance benefits and risks.
Over the course of my career, I have been fortunate to experience firsthand some major technological revolutions – from the internet boom to the “renaissance” of artificial intelligence a decade ago with machine learning.
However, I have never seen such a rapid rate of adoption as the one we are experiencing now, thanks to generative AI. Although this type of AI is not yet perfect and presents significant risks – such as so-called “hallucinations” or the possibility of generating toxic content – it fills a real need, both for people and for companies, generating a concrete impact on communication, creativity and decision-making processes.
Defining the Goals of Generative AI in the Enterprise
When we talk about AI, we must first ask ourselves what problems we really want to solve. As a teacher and consultant, I have always supported the importance of starting from the specific context of a company and its concrete objectives, without inventing solutions that are as “smart” as they are useless.
AI is a formidable tool to support different processes: from decision-making to optimizing operations or developing more accurate predictive analyses. But to have a significant impact on the business, you need to choose carefully which task to entrust it with, making sure that the solution also respects the security and privacy needs of your customers.
Understanding Generative AI to Adopt It Effectively
A widespread risk, in fact, is that of being guided by enthusiasm and deploying sophisticated technology where it is not really needed. For example, designing a system of reviews and recommendations for films requires a certain level of attention and consumer protection, but it is very different from an X-ray reading service to diagnose the presence of a tumor. In the second case, there is a huge ethical and medical risk at stake: it is necessary to adapt the design, control measures and governance of the AI to the sensitivity of the context in which it will be used.
The fact that generative AI is spreading so rapidly is a sign of its potential and, at the same time, a call for caution. This technology manages to amaze anyone who tries it: it drafts documents in a few seconds, summarizes or explains complex concepts, manages the processing of extremely complex data. It turns into a trusted assistant that, on the one hand, saves hours of work and, on the other, fosters creativity with unexpected suggestions or solutions.
Yet, it should not be forgotten that these systems can generate "hallucinated" content (i.e., completely incorrect), or show bias or linguistic toxicity where the training data is not sufficient or adequately "clean". Furthermore, working with AI models at scale is not at all trivial: many start-ups and entrepreneurs start from a promising idea, but struggle to implement it on an infrastructure capable of supporting real workloads, with adequate governance measures and risk management strategies. It is crucial to adopt consolidated best practices, structure competent teams, and define a solid operating model and a continuous maintenance plan for the system.
The Role of Generative AI in Supporting Business Decisions
One aspect that I find particularly interesting is the support that AI offers to business decisions. Algorithms can analyze huge amounts of data, simulating multiple scenarios and identifying patterns that elude the human eye. This makes it possible to mitigate biases and distortions – typical of exclusively human decision-making processes – and to predict risks and opportunities with greater objectivity.
At the same time, I believe that human intuition must remain key: data and numerical projections offer a starting point, but context, ethics and sensitivity towards collaborators and society remain elements of human relevance. The right balance between algorithmic analysis and strategic vision is the cornerstone of a responsible adoption of AI.
Industries Where Generative AI Is Transforming Business
As a professor of Responsible Artificial Intelligence and Digital Business & Innovation, I often see how some sectors are adopting AI extremely quickly. Many industries are already transforming rapidly. The financial sector, for example, has always been a pioneer in adopting new technologies: risk analysis, fraud prevention, algorithmic trading, and complex document management are areas where generative AI is proving to be very effective.
Healthcare and life sciences are taking advantage of AI advances in drug discovery, advanced diagnostics, and the analysis of large amounts of clinical data. Sectors such as retail, logistics, and education are also adopting AI to improve their processes and offer more personalized experiences. In light of this, I would say that no industry will be completely excluded from the changes: even “humanistic” professions, such as those related to medical care or psychological counseling, will be able to benefit from it as support, without AI completely replacing the relational and care component.
Integrating Generative AI into the Enterprise: Best Practices and Risk Management
A growing trend is the creation of specialized AI services, or AI-as-a-Service. These are based on large language models but are tailored to specific functionalities (writing, code checking, multimedia content production, research support, etc.). I personally use various AI-as-a-Service tools every day, deriving benefits from them for both teaching and research. I find this model particularly advantageous for small and medium-sized businesses, which can adopt AI solutions without having to invest heavily in infrastructure or in specialized talent that is difficult to find.
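A minimal sketch of the pattern, assuming OpenAI's Python client as the hosted provider (any comparable AI-as-a-Service vendor works the same way), shows how little infrastructure the consuming business needs:

```python
# AI-as-a-Service sketch: consume a hosted model for one specific function
# (code review) instead of running any model infrastructure yourself.
# Provider and model name are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You review Python code and flag bugs."},
        {"role": "user", "content": "def add(a, b): return a - b"},
    ],
)
print(response.choices[0].message.content)
```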
Of course, adopting AI technologies requires a well-structured risk management strategy, covering key areas such as data protection, fairness and absence of bias in algorithms, transparency towards customers, protection of workers, definition of clear responsibilities for automated decisions and, last but not least, attention to environmental impact. Each AI model, especially if trained on huge amounts of data, can require significant energy consumption.
Furthermore, when we talk about generative AI and conversational models, there are additional concerns about possible inappropriate or harmful responses (so-called "hallucinations"), which must be managed by implementing filters, quality controls, and continuous monitoring processes. In other words, although AI can have disruptive and positive effects, the ultimate responsibility remains with the humans and companies that use it.
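As a hedged sketch of that filter-and-monitor idea, the snippet below screens a model's draft reply with a moderation endpoint and logs the outcome. The OpenAI client and model name are our assumptions, and a moderation check addresses the harmful-content side; factual hallucinations need separate quality controls, such as grounding checks:

```python
# Screen a draft model reply before it reaches the user, and log the outcome
# for continuous monitoring. Endpoint and model name are assumptions.
import logging
from openai import OpenAI

client = OpenAI()
log = logging.getLogger("ai_quality")

def vet_reply(draft: str) -> str:
    result = client.moderations.create(model="omni-moderation-latest", input=draft)
    if result.results[0].flagged:
        log.warning("Reply blocked by moderation filter: %r", draft[:80])
        return "Sorry, I can't provide that response."
    log.info("Reply passed moderation.")
    return draft
```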