Source:
- Sheerluxe, published on January 29th, 2025.
What’s the most important thing business leaders or entrepreneurs need to be aware of?
“Leaders need to accept and understand what AI technology can do. I have lived through the internet boom and the initial AI comeback a decade ago in the form of machine learning. Both of these were waves of change in the IT industry that affected every aspect of our society and our lives. But I’ve never seen such a high speed of adoption as with generative AI. Even though the technology is young and not perfect, it is obvious that it fills a real need for most of us, individuals as well as businesses. Therefore, leaders must educate themselves in AI to learn the truth about its capabilities and risks. Use AI to solve a problem; do not invent a clever solution to a problem no one has. Be aware of the new risks that generative AI introduces, like hallucinations and toxicity, and allow use of AI accordingly for your own customers.” – Zorina Alliata, professor of responsible artificial intelligence, digital business & innovation at OPIT
Which industries do you predict will be most disrupted by AI in the next couple of years?
“The financial industry is always one of the first to adopt new technologies. Financial companies are already using generative AI for document processing, risk assessment, fraud prevention and algorithmic trading. Because of increased computing power, we also see AI growth in healthcare and life sciences for drug discovery and enhanced diagnostic procedures. Retail, education and logistics are also adopting AI at a high pace. Which industries will remain unaffected? None, really. Even in high-touch human professions like nursing, therapy and parenting, AI is a tool that can help. While AI may not replace these jobs entirely, the industries will change because AI tools are changing the way the work is done.” – Zorina
Are there any new business models emerging due to AI advancements?
“I think we will see more AI-as-a-service (AIaaS) offerings, where AI tools are built on top of large language models and offer specific capabilities. This is an area where there is a lot of innovation, and I’m excited to see this develop further. I already use AIaaS on a daily basis for better writing, research, creating videos and presentations, and code debugging.” – Zorina
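To make the idea concrete, below is a minimal, hypothetical sketch of what an AIaaS-style capability can look like: a narrow, task-specific function (summarising meeting notes) wrapped around a general-purpose large language model. It assumes the openai Python package (v1.x) and an OPENAI_API_KEY environment variable; the model name and prompts are illustrative placeholders rather than a recommendation.

```python
# Minimal sketch of an "AI-as-a-service" wrapper: one specific capability
# (meeting-notes summarisation) built on top of a general large language model.
# Assumes the openai Python package (v1.x) and OPENAI_API_KEY set in the environment;
# the model name and prompts below are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def summarise_notes(raw_notes: str) -> str:
    """Expose a single, narrow capability (summarisation) as a simple service function."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": "Summarise the meeting notes as five short bullet points."},
            {"role": "user", "content": raw_notes},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(summarise_notes("Discussed Q3 budget, the hiring plan, and the new onboarding flow..."))
```

The value of the service lies in the packaging, not the model: the prompt, the interface and the guardrails turn a general model into a specific capability a customer can rely on.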
What are the biggest challenges for small businesses and start-ups in adopting AI technologies?
“A big risk is too much enthusiasm and optimism. Generative AI has been adopted at a great speed. When you first try it, it is amazing. It can write a whole paper in seconds. It can explain complex diagrams and concepts. It feels like the trusted assistant you always needed, but it’s important to remember that AI comes with risks. It’s one thing to write an AI service that recommends what movie you should watch next, and another thing to write an AI service that reads your X-ray and diagnoses if you have a tumour. These two applications of AI have very different risk thresholds. You need to plan your AI service or product to be appropriate for use and to minimise the risk for your customer. I’ve also seen start-ups that tried out an idea and are now planning to build a product out of it, without any understanding of what it takes to run AI services at scale. Having best practices implemented, a good operational foundation, governance and a clear operational model are all requisites for running any production systems, especially something as risky and fraught with unknowns as AI products are.” – Zorina
Which ethical considerations should entrepreneurs keep in mind when integrating AI into their businesses?
“Some considerations when creating your risk strategy for AI include data privacy and security (ensuring responsible collection and use of customer data); transparency (being clear about how AI is used in products or services); fairness and bias (addressing potential biases in AI algorithms); job displacement (considering the impact on employees and planning for transitions); accountability (establishing clear responsibility for AI-driven decisions); and environmental impact (considering the energy consumption of AI systems).” – Zorina
How is AI changing customer expectations?
“Customer expectations have gone up significantly since generative AI enabled better interactions. Customers expect omni-channel communications, immediate responses and predictive service. For companies that still have data fragmented across several platforms and lack a cohesive customer journey, the learning curve will be steeper. The good news is that there is a lot of innovation in this area.” – Zorina
What skills do you think entrepreneurs will need to succeed in an AI-dominated business world?
“Some skills that would be useful include:
- AI literacy: understanding the basics of AI, machine learning and data science.
- Data analysis & interpretation: ability to work with and derive insights from large datasets.
- Strategic thinking: identifying where AI can add value to business processes and products.
- Ethical decision-making: navigating the ethical implications of AI implementation.
- Adaptability & continuous learning: keeping up with rapidly evolving AI technologies.
- Human-AI collaboration: effectively working alongside AI systems.
- Soft skills: creativity, critical thinking, emotional intelligence and leadership will become even more valuable as AI handles more routine tasks.
As a leader, you are not required to write code or figure out the best way to deploy your model, but a high-level understanding of what AI can do will help you have meaningful conversations with your technical team and create AI products that are truly useful.” – Zorina
Finally, how will AI impact the workforce this year?
“There are several studies on this, such as the one the World Economic Forum (WEF) released this month on the status of work and the future of jobs. Some of the highlights are that AI and other technologies will continue to broaden digital access, with the first effect being increased demand for AI and data skills. Technology-related roles are the fastest growing in percentage terms, but frontline roles like farmworkers, delivery drivers and construction workers are predicted to see the largest growth in absolute numbers. AI has evolved quickly to create images and videos, threatening the jobs of designers and movie producers; that is not what we would have predicted a few years ago. AI has a way of growing in unexpected directions as we discover new paths of research and invent new ways to use it. I personally think it is hard to predict exactly where AI will go, and what the result will be of automating all routine tasks and of AI behaving more like humans. One thing we can be sure of is that people who understand AI and know how to use it will benefit from whatever new challenges come our way.” – Zorina

Source:
- Agenda Digitale, published on June 16th, 2025
By Lokesh Vij, Professor of Cloud Computing Infrastructure, Cloud Development, Cloud Computing Automation and Ops, and Cloud Data Stacks at OPIT – Open Institute of Technology
NIST identifies five key characteristics of cloud computing: on-demand self-service, broad network access, resource pooling, rapid elasticity, and measured service. These pillars explain the success of the global cloud market, worth an estimated $912 billion in 2025.

You’ve probably seen two of the most recent popular social media trends. The first is creating and posting a personalized action-figure version of yourself, complete with accessories, from a yoga mat to your favorite musical instrument. The second is the Studio Ghibli trend, which creates an image of you in the style of a character from one of the animation studio’s popular films.
Both of these are possible thanks to OpenAI’s GPT-4o-powered image generator. But what are you risking when you upload a picture to generate this kind of content? More than you might imagine, according to Tom Vazdar, chair of cybersecurity at the Open Institute of Technology (OPIT), in a recent interview with Wired. Let’s take a closer look at the risks and how they tie into the broader issue of responsible artificial intelligence.
Uploading Your Image
To get a personalized image of yourself back from ChatGPT, you need to upload an actual photo, or potentially multiple images, and tell ChatGPT what you want. But in addition to using your image to generate content for you, OpenAI could also be using your willingly submitted image to help train its AI model. Vazdar, who is also CEO and AI & Cybersecurity Strategist at Riskoria and a board member for the Croatian AI Association, says that this kind of content is “a gold mine for training generative models,” but you have limited power over how that image is integrated into their training strategy.
Plus, you are uploading much more than just an image of yourself. Vazdar reminds us that we are handing over “an entire bundle of metadata.” This includes the EXIF data attached to the image, such as exactly when and where the photo was taken. And your photo may contain more than you imagine: the background, including people, landmarks, and objects, can also be tied to that time and place.
In addition to this, OpenAI also collects data about the device that you are using to engage with the platform, and, according to Vazdar, “There’s also behavioral data, such as what you typed, what kind of image you asked for, how you interacted with the interface and the frequency of those actions.”
After all that, OpenAI knows a lot about you, and soon its AI model could too, because it is learning from you.
How OpenAI Uses Your Data
OpenAI claims that they did not orchestrate these social media trends simply to get training data for their AI, and that’s almost certainly true. But they also aren’t denying that access to that freely uploaded data is a bonus. As Vazdar points out, “This trend, whether by design or a convenient opportunity, is providing the company with massive volumes of fresh, high-quality facial data from diverse age groups, ethnicities, and geographies.”
OpenAI isn’t the only company using your data to train its AI. Meta recently updated its privacy policy to allow the company to use your personal information on Meta-related services, such as Facebook, Instagram, and WhatsApp, to train its AI. While it is possible to opt out, Meta isn’t advertising that fact or making it easy, which means that most users are sharing their data by default.
You can also control what happens with your data when using ChatGPT. Again, while not well publicized, you can use ChatGPT’s self-service tools to access, export, and delete your personal information, and opt out of having your content used to improve OpenAI’s model. Nevertheless, even if you choose these options, it is still worth stripping data like location and time from images before uploading them, and considering the privacy of any images, including people and objects in the background, before sharing.
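If you do want to share a photo, that kind of scrubbing can be automated. The following is a minimal sketch, assuming the Pillow imaging library is installed and using placeholder filenames: it copies only the pixel data into a fresh image, so the EXIF block (GPS position, timestamp, device details) attached to the original file is not carried over.

```python
# Minimal sketch: re-save a photo without its EXIF metadata before uploading it.
# Assumes the Pillow library (pip install Pillow); filenames are placeholders.
from PIL import Image


def strip_exif(src_path: str, dst_path: str) -> None:
    """Copy only the pixel data into a new file, leaving EXIF metadata behind."""
    with Image.open(src_path) as img:
        exif = img.getexif()
        if exif:
            print(f"{len(exif)} EXIF tags found (e.g. GPS position, timestamp); they will not be copied.")
        clean = Image.new(img.mode, img.size)
        clean.putdata(list(img.getdata()))
        clean.save(dst_path)


strip_exif("holiday_photo.jpg", "holiday_photo_clean.jpg")
```

Dedicated tools such as ExifTool do the same job; the point is simply that the metadata travels with the file unless you remove it before uploading.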
Are Data Protection Laws Keeping Up?
OpenAI and Meta need to provide these kinds of opt-outs due to data protection laws, such as GDPR in the EU and the UK. GDPR gives you the right to access or delete your data, and the use of biometric data requires your explicit consent. However, your photo only becomes biometric data when it is processed using a specific technical measure that allows for the unique identification of an individual.
But just because ChatGPT is not using this technology doesn’t mean it can’t learn a lot about you from your images.
AI and Ethics Concerns
But you might wonder, “Isn’t it a good thing that AI is being trained on a diverse range of photos?” After all, there have been widespread reports in the past of AI struggling to recognize black faces because models have been trained mostly on white faces. Similarly, there have been reports of bias within AI due to the information it receives. Doesn’t sharing from a wide range of users help combat that? Yes, but there is so much more that could be done with that data without your knowledge or consent.
One of the biggest risks is that the data can be used for marketing purposes, not just to get you to buy products, but also potentially to manipulate behavior. Take, for instance, the Cambridge Analytica scandal, which saw data used to manipulate voters, or the proliferation of deepfakes spreading false news.
Vazdar believes that AI should be used to promote human freedom and autonomy, not threaten it. It should be something that benefits humanity in the broadest possible sense, and not just those with the power to develop and profit from AI.
Responsible Artificial Intelligence
OPIT’s Master’s in Responsible AI combines technical expertise with a focus on the ethical implications of AI, diving into questions such as this one. Focusing on real-world applications, the course considers sustainable AI, environmental impact, ethical considerations, and social responsibility.
Completed over three or four 13-week terms, it starts with a foundation in technical artificial intelligence and then moves on to advanced AI applications. Students finish with a Capstone project, which sees them apply what they have learned to real-world problems.