
During the Open Institute of Technology’s (OPIT’s) 2025 Graduation Day, we conducted interviews with many recent graduates to understand why they chose OPIT, how they felt about the course, and what advice they might give to others considering studying at OPIT.
Karina is an experienced FinTech professional with a background as an integration manager, ERP specialist, and business analyst. She was interested in learning AI applications to expand her career possibilities, and she chose OPIT's MSc in Applied Data Science & AI.
In the interview, Karina discussed why she chose OPIT over other courses of study, the main challenges she faced when completing the course while working full-time, and the kind of support she received from OPIT and other students.
Why Study at OPIT?
Karina explained that she was interested in enhancing her AI skills to take advantage of a major emerging technology in the FinTech field. She said that she was looking for a course that was affordable and that she could manage alongside her current demanding job. Karina noted that she did not have the luxury to take time off to become a full-time student.
She was principally looking at courses in the United States and the United Kingdom. She found that comprehensive courses were expensive, costing upwards of $50,000, and did not always offer flexible study options. Meanwhile, flexible courses that she could complete while working offered excellent individual modules, but didn’t always add up to a coherent whole. This was something that set OPIT apart.
Karina admitted that she was initially skeptical when she encountered OPIT because, at the time, it was still very new. OPIT only started offering courses in September 2023, so 2025 marked its first cohort of graduates.
Nevertheless, Karina was interested in OPIT’s affordable study options and the flexibility of fully remote learning and part-time options. She said that when she looked into the course, she realized that it aligned very closely with what she was looking for.
In particular, Karina noted that she was always wary of further study because of the level of mathematics required in most computer science courses. She appreciated that OPIT’s course focused on understanding the underlying core principles and the potential applications, rather than the fine programming and mathematical details. This made the course more applicable to her professional life.
OPIT’s MSc in Applied Data Science & AI
The course Karina took was OPIT's MSc in Applied Data Science & AI. It is a three- to four-term course (each term lasting 13 weeks) that can take between one and two years to complete, depending on the pace you choose and whether you take the 90 or 120 ECTS option. As well as part-time, there are also regular and fast-track options.
The course is fully online and taught in English, with an accessible tuition fee of €2,250 per term: €6,750 for the 90 ECTS course and €9,000 for the 120 ECTS course. Payment plans and scholarships are available, and discounts apply if you pay the full amount upfront.
The course pairs foundational tech modules with business application modules, then ends with a term-long research project culminating in a thesis. Internships with industry partners are encouraged and facilitated by OPIT, or professionals can work on projects within their own companies.
Entry requirements include a bachelor’s degree or equivalency in any field, including non-tech fields, and English proficiency to a B2 level.
Faculty members include Pierluigi Casale, a former Data Science and AI Innovation Officer for the European Parliament and Principal Data Scientist at TomTom; Paco Awissi, former VP at PSL Group and an instructor at McGill University; and Marzi Bakhshandeh, a Senior Product Manager at ING.
Challenges and Support
Karina shared that her biggest challenge while studying at OPIT was time management and juggling the heavy learning schedule with her hectic job. She admitted that when balancing the two, there were times when her social life suffered, but it was doable. The key to her success was organization, time management, and the support of the rest of the cohort.
According to Karina, the cohort WhatsApp group was often a lifeline that helped keep her focused and optimistic during challenging times. Sharing challenges with others in the same boat and seeing the example of her peers often helped.
The OPIT Cohort
OPIT has a wide and varied cohort with over 300 students studying remotely from 78 countries around the world. Around 80% of OPIT's students are already working professionals who are currently employed at top companies in a variety of industries. This includes global tech firms such as Accenture, Cisco, and Broadcom; finance and professional services firms like UBS, PwC, Deloitte, and the First Bank of Nigeria; and innovative startups and enterprises like Dynatrace, Leonardo, and the Pharo Foundation.
Study Methods
This cohort meets in OPIT's online classrooms, powered by the Canvas Learning Management System (LMS). One of the world's leading teaching and learning platforms, Canvas acts as a virtual hub for all of OPIT's academic activities, including live lectures and discussion boards. OPIT also uses the same portal to conduct continuous assessments and prepare students for final exams.
If you want to collaborate with other students, there is a collaboration tab where you can set up workrooms, as well as an official Slack workspace. Students tend to use WhatsApp for other informal communications.
If students need additional support, they can book an appointment with the course coordinator through Canvas to get advice on managing their workload and balancing their commitments. Students also get access to experienced career advisor Mike McCulloch, who can provide expert guidance.
A Supportive Environment
These services and resources create a supportive environment for OPIT students, which Karina says helped her throughout her course of study. Karina suggests organization and leaning into help from the community are the best ways to succeed when studying with OPIT.

In April 2025, Professor Francesco Derchi, Chair of the Open Institute of Technology's (OPIT's) Digital Business programs, entered the online classroom to talk about the current state of the Metaverse and what companies can do to engage with this technological shift. As an expert in digital marketing, he is well placed to talk about how brands can leverage the Metaverse to further company goals.
Current State of the Metaverse
Francesco started by exploring what the Metaverse is and the rocky history of its development. Although many associate the term Metaverse with Mark Zuckerberg’s 2021 announcement of Meta’s pivot toward a virtual immersive experience co-created by users, the concept actually existed long before. In his 1992 novel Snow Crash, author Neal Stephenson described a very similar concept, with people using avatars to seamlessly step out of the real world and into a highly connected virtual world.
Zuckerberg’s announcement was not even the start of real Metaverse-like experiences. Released in 2003, Second Life is a virtual world in which multiple users come together and engage through avatars. Participation in Second Life peaked at about one million active users in 2007. Similarly, Minecraft, released in 2011, is a virtual world where users can explore and build, and it offers multiplayer options.
What set Zuckerberg’s vision apart from these earlier iterations is that he imagined a much broader virtual world, with almost limitless creation and interaction possibilities. However, this proved much more difficult in practice.
Both Meta and Microsoft started investing significantly in the Metaverse at around the same time, with Microsoft completing its acquisition of Activision Blizzard, a gaming company that creates virtual-world games such as World of Warcraft, in 2023 and working with Epic Games to bring Fortnite to its Xbox cloud gaming platform.
But limited adoption of new Metaverse technology saw both Meta and Microsoft announce major layoffs and cutbacks on their Metaverse investments.
Open Garden Metaverse
One of the major issues for the big Metaverse vision is that it requires an open-garden Metaverse. Matthew Ball defined this kind of Metaverse in his 2022 book, The Metaverse: And How It Will Revolutionize Everything:
“A massively scaled and interoperable network of real-time rendered 3D virtual worlds that can be experienced synchronously and persistently by an effectively unlimited number of users with an individual sense of presence, and with continuity of data, such as identity, history, entitlements, objects, communication, and payments.”
This vision requires an open Metaverse, a virtual world beyond any single company’s walled garden that allows interaction across platforms. With the current technology and state of the market, this is believed to be at least 10 years away.
With that in mind, Zuckerberg and Meta have pivoted away from expanding their Metaverse towards delivering devices such as AI glasses with augmented reality capabilities and virtual reality headsets.
Nevertheless, the Metaverse is still expanding today, but within walled-garden contexts. Francesco pointed to Pokémon Go and Roblox as examples of Metaverse-esque worlds with enormous engagement and popularity.
Brands Engaging with the Metaverse: Nike Case Study
What does that mean for brands? Should they ignore the Metaverse until it becomes a more realistic proposition, or should they be establishing their Metaverse presence now?
Francesco used Nike's successful approach to Metaverse engagement to show how brands can leverage the Metaverse today.
He pointed out that this was a strategic move from Nike to protect its brand. Because Nike is a cultural phenomenon, people naturally bring their affinity with the brand into the virtual space with them. If Nike doesn't constantly monitor that presence, it can lose control of it. Rather than seeing this as a threat, Nike identified it as an opportunity. As people engage more online, their virtual appearance can become even more important than their physical appearance. Therefore, there is a space for Nike to occupy in this virtual world as a cultural icon.
Nike chose an ad hoc approach, going to users where they are and providing experiences within popular existing platforms.
With more than 1.5 million people playing Fortnite every day, Nike started there, first selling a variety of virtual shoes that users can buy to kit out their avatars.
Roblox similarly has around 380 million monthly active users, so Nike entered the space and created NIKELAND, a purpose-built virtual area that offers a unique brand experience in the virtual world. For example, during NBA All-Star Week, LeBron James visited NIKELAND, where he coached and engaged with players. During the FIFA World Cup, NIKELAND let users claim two free soccer jerseys to show support for their favorite teams. According to statistics published at the end of 2023, in less than two years NIKELAND attracted more than 34.9 million visitors, over 13.4 billion hours of engagement, and $185 million in sales of NFTs (non-fungible tokens, or unique digital assets).
Final Thoughts
Francesco concluded by noting that while Nike has been successful in the Metaverse, this success will not necessarily be simple for smaller brands to replicate. Nike succeeded in the virtual world because it is a cultural phenomenon, and the Metaverse is a combination of technology and culture.
Therefore, brands today must decide how to engage with the current state of the Metaverse and prepare for its potential future expansion. Because existing Metaverses are walled gardens, brands also need to decide which Metaverses warrant investment or whether it is worth creating their own dedicated platforms. This all comes down to an appetite for risk.
Facing these types of challenges comes down to understanding the business potential of new technologies and making decisions based on risk and opportunity. OPIT’s BSc in Digital Business and MSc in Digital Business and Innovation help develop these skills, with Francesco also serving as program chair.

Source:
- Computer Weekly, published on May 27th, 2025
By Nicholas Fearn
An AWS tech stack can aid business growth and facilitate efficient operations, but misconfigurations have become all too common and stall this progress
Amazon Web Services (AWS) has become the lifeblood of millions of modern businesses, both big and small. But while this popular cloud platform enables them to manage and scale their operations with impressive speed, simplicity and affordability, it also represents a significant security and privacy risk if mismanaged by users.

Quality of faculty: that's what compelled Matthew Belcher, an independent freelance web developer studying towards a Bachelor's in Computer Science, to enroll at the Open Institute of Technology (OPIT), allowing him to work with some of the top professionals in his chosen industry.
Matthew was recently elected program representative for the course, helping to represent the interests of over 100 students across the globe in discussions with the area chair and director to ensure that the course is fit for purpose and meets student needs.
Why OPIT?
In a recent interview, Matthew told us that the reason he chose OPIT to continue his education was the expertise of the faculty. He shared that he conducted intensive research into a variety of programs that would help him shape his interest and skill in technology into potential career opportunities.
Matthew felt that the OPIT faculty stood out as not just great teachers, but top-tier professionals with real-world experience in the kinds of industries he wants to work in. Their professional roles mean they offer an up-to-the-minute understanding of the changing market.
The Computer Science Program
The BSc in Computer Science is a fully accredited course delivered completely online over six 13-week terms.
The program delivers foundational skills, both theoretical and practical, in all aspects of computer science, including programming, software development, databases, cloud computing, cybersecurity, data science, and artificial intelligence. Students deliver a dissertation or project in the final term, and in term five they can select five electives from a pool of 27 or specialize in one of five fields:
- Data Science and Artificial Intelligence
- Cloud Computing
- Cybersecurity
- Metaverse and Gaming
- Full Stack Development
The BSc in Computer Science is aimed at students interested in software development and engineering, data science, web development, app and game development, IT business analysis, cybersecurity, and database architecture.
Meet the Faculty
Now let’s meet some of the incredible faculty who influenced Matthew’s decision to select OPIT.
Art Sedighi
As well as teaching at OPIT, Art Sedighi is an adjunct professor at Johns Hopkins University and a professor at Purdue University Global. He was previously a partner at CDI Global, a senior cybersecurity solution architect at Amazon AWS, and head of high-performance computing and grid engineering at Bank of America.
Sedighi has 20 years of experience planning, designing, developing, and taking end-to-end ownership of cloud solutions. He has managed cross-functional teams and has been responsible for driving adoption strategies across enterprises.
Sedighi teaches software engineering and cloud adoption in the BSc in Computer Science and is also a professor in the BSc in Digital Business program.
Lokesh Vij
As well as teaching in OPIT's Computer Science program, Lokesh Vij is a database architect at Broadcom Inc., a semiconductor manufacturing firm, and a part-time professor at Seneca Polytechnic in Toronto, Canada.
Vij describes himself as a cloud and data evangelist, educator, and mentor. With over 20 years of rich experience, he has designed, developed, and delivered enterprise-scale data solutions across disparate source systems, data formats, and relational and non-relational databases.
His expertise includes enhancing business profitability by leveraging insights derived from diverse data sets and constructing decision support systems that convert transactional data into analytical formats. This, combined with data visualization techniques, allows him to articulate and clarify business insights effectively.
In the BSc in Computer Science program, he teaches cloud computing infrastructure, cloud development, cloud computing automation and ops, and cloud data stacks. He is also a professor in the BSc in Digital Business and the MSc in Applied Data Science & AI programs, covering big data and cloud computing.
Tom Vazdar
Tom Vazdar is OPIT’s area chair for cybersecurity and also the CEO of Riskoria, which focuses on cybersecurity strategies. His areas of expertise include strategy services, AI and cybersecurity, AI infrastructure development, and compliance assurance and risk management.
Vazdar teaches computer security in both the BSc in Computer Science and the BSc in Digital Business programs, as well as for the MSc in Enterprise Cybersecurity.
Sylvester Kaczmarek
Sylvester Kaczmarek is a former Chief Science Officer at WeSpace Technologies and a former AI Mentor and Researcher at NASA. He is now an independent executive leading secure, safe, and resilient AI-driven space innovations, a science communicator, and an advisor to deep tech investors, governments, and startups.
In the past, he has specialized in the integration of artificial intelligence, robotics, cybersecurity, and edge computing in aerospace applications.
In OPIT’s BSc in Computer Science program, Kaczmarek teaches cartography and secure communications, secure software development, and parallel and distributed computing. He also teaches in the MSc in Enterprise Cybersecurity program.
Lorenzo Marvardi
As well as teaching at OPIT, Lorenzo Marvardi is a managing director at Accenture. With a background in cybersecurity management and security consulting, he has 15 years of experience as a security professional and executive, implementing security programs across different industries and geographies.
Marvardi teaches cybersecurity in both the BSc in Computer Science and the BSc in Digital Business programs.
Khaled Elbehiery
Khaled Elbehiery is a senior director and network engineer at Charter Communications. As well as teaching for OPIT, he is a part-time professor at Park University and DeVry University, both in the United States.
He describes himself as a professor, scientist, inventor, and author. He has publications on cloud engineering, quantum computing, space, robotics, bionics, microgravity, and modern educational methodologies.
Elbehiery teaches cloud and IoT security and computer networks on both the BSc in Computer Science and the MSc in Enterprise Cybersecurity programs.
Francesco Derchi
Francesco Derchi is a brand, innovation, and digital expert with over 14 years of experience. He is chair of Digital Business at OPIT, an advisory board member for Arsene Lippens, and a faculty member at EHL in Switzerland and the Università degli Studi di Genova, as well as a course director for and contributor to Harvard Business Review Italia.
Derchi teaches business strategy and digital marketing on both the BSc in Computer Science and the BSc in Digital Business, as well as on the MSc in Digital Business and Innovation.

In April 2025, the Open Institute of Technology (OPIT) invited Professor Andrea Gozzi, Head of Strategy at Partnership for the Digital Industries Ecosystem at Siemens, Italy, to talk about how new technologies are transforming industry.
Industry Is Driving Technological Innovation
According to Gozzi, who teaches in the OPIT BSc in Digital Business and MSc in Digital Business and Innovation programs, many of the young people he meets imagine that the development of technology like artificial intelligence (AI) and the Internet of Things (IoT) is being led by digital companies like Amazon and Apple. But, he adds, they haven’t really considered how industry is utilizing and leading in these fields.
Industry includes markets such as energy, aviation, and manufacturing. Gozzi explains how new technologies are transforming these industries and, in turn, how these industries are pioneering new technologies. As a result, industry represents a growing job market, especially for OPIT graduates.
Challenges to Industry
Gozzi began by explaining the challenges facing industry today and how they necessitate innovation. He identified three principal challenges:
- Geopolitical instability: events like the war in Ukraine and changing tariffs are undermining supply lines, complicating contracts, and making sales unpredictable.
- Labor shortages: changing demographics and workplace culture are shrinking the skilled workforce. Gozzi explained that in the past, people worked for one company for their entire lives, learning specialist skills and adding value throughout their careers. Today, people tend to change jobs several times, so to maximize their value, they need to be brought up to speed more quickly, and companies need strategies to retain knowledge despite labor churn.
- Sustainability pressures: these are on the rise, both in terms of meeting green regulations and ensuring that expensive factories provide a long-term return on investment.
To adapt to these challenges, industry is looking to develop innovative technological solutions.
Opportunities: AI, Digital Twins, Industrial IoT, and Robotics
Having set the scene, Gozzi dove into the types of technologies that industry is pioneering to adapt to these challenges, and how industry is pushing these technologies beyond their mainstream potential.
Artificial Intelligence (AI)
Gozzi explained how AI promises to be an industry game changer, especially in the quest to go from automated to adaptive manufacturing. Automated manufacturing can create products based on existing programming, recreating the same product perfectly every time while eliminating human error.
But the goal is to create machines that don't just make one thing but can make anything, adapting as designs change. Adaptive manufacturing is one of the core goals of Industry 4.0, the next industrial revolution of automated, interconnected factories. Gozzi shared a compelling example of applications by Rolls-Royce for airplane engine production.
AI powers machines to handle more complex tasks independently, which requires them to be networked and connected. As a result, factories can collect data from every area and use it to optimize supply chains and resource allocation. This enables predictive maintenance, reducing downtime by around 50%, and scheduling activity, reducing yield losses by around 40%. Since factories represent major investments, maximizing productivity is essential to achieving a strong return on investment (ROI).
This same data is increasingly being used to iterate and test prototypes in the digital twin environment.
Digital Twins
According to Gozzi, today’s factories are created with intelligent interconnected machines that are adaptable and extendable and can provide strategic insights. The entire manufacturing system is connected by silent threads of information that can turn siloed data into a much larger picture.
These threads form the neural network of digital twins, which are virtual replicas of physical systems that can be used to monitor and optimize operations, simulate actions before execution, and accelerate innovation. New production techniques and designs can be tested virtually before being trialed in the real world, allowing processes to be optimized for quality and efficiency. Here, Gozzi shared a compelling example from Siemens' accelerator ecosystem, which creates replicas of factories in the metaverse.
IIoT and Robotics
In the factories themselves, manufacturers rely on advanced robotics connected by the Industrial Internet of Things (IIoT) to carry out tasks independently. IIoT ensures that a centralized intelligence integrates every element of production in real time. This not only allows robots to do their jobs but also ensures that issues can be identified and solutions executed through the digital twin. The digital twin can also be used to teach robots new tasks, which are tested and piloted virtually before reaching the factory floor.
Meanwhile, robots must be mobile, AI-driven, and integrated into the factory's ecosystem. It is AI that has made robots aware of their environment, enabling new behaviors such as picking pieces from a non-uniform set and placing them where they are required. This may seem small, but it is something that robots had never been able to do before, and it represents one of the biggest recent breakthroughs in manufacturing.
AI is also enabling robots to safely work alongside human workers in manufacturing settings. These collaborative robots, called “cobots,” are another recent development that promises to be transformative.
Cybersecurity
Gozzi explained that all this requires advanced connectivity and a closed network with a “zero trust” architecture to mitigate security threats. Hackers, viruses, and network failures don't just represent the potential loss of valuable and confidential data, but also a major safety risk if cobots are interfered with and begin to act unsafely.
Powerful and secure industrial closed networks are being enabled by 5G, but these networks still need to be monitored continuously, in real time, by cybersecurity teams.
The Future of Industry
Industry is modernizing quickly, and technology is playing an increasingly important role. This is manifesting in the development of smart adaptive factories using a mix of AI, advanced robotics, and digital twins. Industrial IoT, strong connected networks, and advanced cybersecurity support this. The demand for talent that understands these technologies and their applications will only increase in the coming years.
Moreover, Gozzi reassured students that while technology will replace real people in some positions, humans will still be needed to collaborate and work side by side with robots. This will require people to learn new skills, in particular a data-driven mindset and approach to problem-solving.
Both OPIT’s BSc in Digital Business and MSc in Digital Business and Innovation are designed to prepare students for these kinds of careers.

Source:
- Agenda Digitale, published on May 16th, 2025
By Riccardo Ocleppo, Founder and Director of OPIT – Open Institute of Technology
AI ethics requires ongoing commitment. Organizations must integrate guidelines and a corporate culture geared towards responsibility and inclusiveness, preventing negative consequences for individuals and society.
In the world of artificial intelligence, concerns about algorithmic bias are coming to the forefront, calling for a collective effort to promote ethical practices in the development and use of AI.
This implies the need to understand the multiple causes and potential consequences of the biases themselves, identify concrete solutions and recognize the key role of academic institutions in this process.
Bias in AI is a form of injustice, often systemic, that can be embedded in algorithms. Its origins are many, but the main culprit is almost always the data set used to train the models. If this data reflects inequalities or prejudices present in society, the risk is that AI will absorb and reproduce them, consolidating these distortions.
But bias can also manifest itself in the opposite direction. This is what happened some time ago with Google Gemini. The generative AI system developed by Google, in an attempt to ensure greater inclusivity, ended up generating content and images completely disconnected from the reality it was supposed to represent.
Further complicating the picture is the very nature of AI models, which are often characterized by complex algorithms and opaque decision-making processes. This complexity makes it difficult to identify, and therefore correct, biases inherent in the systems.
Ethical Data Management to Reduce Bias in AI
Adopting good data management practices is essential to address these issues. The first step is to ensure that the datasets used for training are diverse and representative. This means actively seeking data that includes a wide variety of demographic, cultural, and social contexts, so as to avoid AI exclusively reproducing existing and potentially biased models.
Alongside data diversification, it is equally important to test models on different demographic groups. Only in this way can latent biases that would otherwise remain invisible be highlighted. Furthermore, promoting transparency in algorithms and decision-making processes is crucial. Transparency allows for critical control and makes all actors involved in the design and use of AI accountable.
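To make the second point concrete, a per-group test can be as simple as comparing a model's accuracy across demographic slices. The sketch below is illustrative only, assuming a pandas DataFrame with hypothetical y_true, y_pred, and group columns; a real audit would add further fairness metrics, such as gaps in false-positive rates.

```python
# Minimal sketch of a per-group fairness check. Assumes a pandas
# DataFrame with hypothetical columns: y_true (true labels),
# y_pred (model predictions), and group (a demographic attribute).
import pandas as pd

def per_group_accuracy(df: pd.DataFrame) -> pd.Series:
    """Return the model's accuracy within each demographic group."""
    return (df["y_true"] == df["y_pred"]).groupby(df["group"]).mean()

# Toy data for demonstration purposes only.
df = pd.DataFrame({
    "y_true": [1, 0, 1, 1, 0, 1, 0, 0],
    "y_pred": [1, 0, 0, 1, 0, 1, 1, 0],
    "group":  ["A", "A", "A", "B", "B", "B", "B", "B"],
})

acc = per_group_accuracy(df)
gap = acc.max() - acc
print(acc)
# Flag groups trailing the best-served group by more than 5 points.
print("Groups needing attention:", list(gap[gap > 0.05].index))
```

A disparity surfaced this way does not by itself prove bias, but it shows developers exactly where to look before a model ships.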
Strategies for Ethical and Responsible Artificial Intelligence
Building ethical AI is not an isolated action, but an ongoing journey that requires constant attention and updating. This commitment is divided into several fundamental steps. First, ethical guidelines must be defined. Organizations must clearly establish the ethical standards to follow in the development and use of AI, inspired by fundamental values such as fairness, responsibility and transparency. These principles serve as a compass to guide all projects.
It is also essential to include a plurality of perspectives in the development of AI. Multidisciplinary teams, composed of technologists, ethicists, sociologists, and representatives of the potentially involved communities, can help prevent and correct biases thanks to the variety of approaches. Last but not least, organizations must promote an ethical culture: in addition to establishing rules and composing diverse teams, it is essential to cultivate a corporate culture that places ethics at the center of every project. Only by integrating these values into the DNA of the organization can we ensure that ethics is a founding element of AI development.
The Consequences of Biased Artificial Intelligence
Ignoring the problem of bias can have serious and unpredictable consequences, with profound impacts on different areas of our lives. From the reinforcement of social inequalities to the loss of trust in AI-based systems, the risk is to fuel skepticism and resistance towards technological innovation. AI, if distorted, can negatively influence crucial decisions in sectors such as healthcare, employment and justice. Think, for example, of loan selection algorithms that unfairly penalize certain categories, or facial recognition software that incorrectly identifies people, with possible legal consequences. These are just some of the situations in which an unethical use of AI can worsen existing inequalities.
University Training and Research to Counter Bias in AI
Universities and higher education institutions have a crucial responsibility to address bias and promote ethical practices in AI development. Ethics must certainly be integrated into educational curricula. By including ethics modules in AI and computer science courses, universities can provide new generations of developers with the tools to recognize and address bias, contributing to more equitable and inclusive design. Universities can also be protagonists through research.
Academic institutions, with their autonomy and expertise, can explore the complexities of bias in depth, developing innovative solutions for detecting and mitigating it. Since bias is multidimensional in nature, an interdisciplinary, collaborative approach is needed. Universities can create spaces where computer scientists, ethicists, lawyers, and social scientists work together, offering more comprehensive and innovative solutions.
But that’s not all. As places of critical thinking and debate, universities can foster dialogue between developers, policy makers, and citizens through events, workshops, and conferences. This engagement is essential to raise awareness and promote responsible use of AI.
In this direction, several universities have already activated degree courses in artificial intelligence that combine advanced technical skills (in areas such as machine learning, computer vision and natural language processing) with training that is attentive to ethical and human implications.
Academic Opportunities for an Equitable AI Future
More and more universities around the world – including Yale and Oxford – are also creating research departments dedicated to AI and ethics.
The path to ethical AI is complex, but it also represents an opportunity to build a future where technology truly serves the common good.
By recognizing the root causes of bias, adopting responsible data practices, and engaging in ongoing and vigilant development, we can reduce the unintended effects of biased algorithms. In this process, academic institutions – thanks to their expertise and authority – are at the forefront, helping to shape a more equitable and inclusive digital age.

Source:
- TechFinancials, published on May 16th, 2025
By Zorina Alliata
Artificial intelligence (AI) used to be the stuff of science fiction. Stories about rogue machines and robot uprisings were once a source of amusement, not anxiety. But over recent years, AI has quietly embedded itself in our daily lives.
From the algorithms behind social media feeds to the voice assistants managing our calendars, this quiet takeover has become something far louder: fear.
Headlines around AI are often alarmist. Statements such as “AI will take your job”, “AI will end education”, or “AI is dangerous and unregulated” are thrown around regularly. These narratives feed on uncertainty and fuel distrust.
But it doesn’t have to be this way. The hyper-fixation on the never-ending negative aspects of AI is the wrong approach to take. What if AI isn’t the villain? What if, at this stage, it’s simply a child?
AI, in many ways, is still learning. It mimics human behaviour, absorbs language, and forms patterns based on what it sees. Its current capabilities, however powerful they may seem, are not equivalent to human intelligence. It has limitations. It makes mistakes. It can even be manipulated and misled. It reflects our world, flaws and all. In that sense, AI is less an omnipotent force and more in line with a toddler trying to find its way.
And, like any child, it needs guidance.
This is especially evident in education. The emergence of AI tools such as ChatGPT has caused a stir in higher education institutions and universities, sparking fears about plagiarism and the erosion of critical thinking. Some institutions have responded with strict bans, while others have embraced cautious integration. The panic is understandable, but is it misplaced?
Rather than jumping to conclusions, educators should consider shifting the conversation. AI can, in fact, become an ally in learning. Instead of assuming students will cheat, we can teach them to use AI responsibly. Most of us educators can already recognise the signs of AI-generated work: excessive use of numbered lists, repetitive language and poor comparison skills. So why not use this as a teaching opportunity?
Encouraging students to engage with AI critically, understanding what it’s good at, where it falls short, and how to improve its output, can strengthen their own analytical skills. It invites them to become more active participants in their learning, not passive consumers of machine generated answers. Teaching young people how to work with AI is arguably more important than shielding them from it.
Outside the classroom, AI’s impact on the workforce is another growing concern. Stories about AI replacing jobs often dominate the news cycle. But these conversations often ignore a key point: AI is not autonomous. AI needs human designers, engineers, analysts, and ethicists to guide it. For every job that AI may eliminate, others will emerge to support and direct it.
More importantly, there are many things AI simply cannot do. It doesn’t understand nuance, morality or emotion. It can’t make ethical decisions without human input. These aren’t minor gaps, they’re fundamental. That’s why we must stop imagining AI as an unstoppable force and start thinking about how to raise it responsibly.
When considering how to raise our AI child responsibly, we need to acknowledge the issue of the algorithm being biased. Critics often point out that AI reproduces prejudices and errors, and whilst this is true, the source of that bias is us. It is important to remember that AI learns from historical data created by us, much of which reflects deeply ingrained societal inequalities.
Take, for example, mortgage lending in the US, where decades of discriminatory practices have skewed the data. Unless we intervene, AI trained on this information will inevitably reflect those same biases.
That’s not a reason to reject AI. It’s a reason to be more involved in its development, like any good parent. The responsibility lies with us.
Parenting is not about control for control’s sake; it’s about nurturing growth while setting boundaries. AI, like a child, needs feedback, accountability, and care. It will grow, but how it grows is up to us.
It’s tempting to view technology as something that happens to us, rather than something we can shape. But AI doesn’t exist outside of society, it’s a product of our values, decisions, and input. If we treat it as a monster, it may become one. If we treat it as a mirror, it will reflect what we show it. And if we treat it as a child, we may be able to raise it into something better.
So instead of fearmongering, let’s ask ourselves a better question: Are we raising AI correctly?
- Zorina Alliata, Professor of Responsible AI at OPIT – Open Institute of Technology

Source:
- Wired, published on May 1st, 2025
People are using ChatGPT’s new image generator to take part in viral social media trends. But using it also puts your privacy at risk—unless you take a few simple steps to protect yourself.
By Kate O’Flaherty
At the start of April, an influx of action figures started appearing on social media sites including LinkedIn and X. Each figure depicted the person who had created it with uncanny accuracy, complete with personalized accessories such as reusable coffee cups, yoga mats, and headphones.
All this is possible because of OpenAI’s new GPT-4o-powered image generator, which supercharges ChatGPT’s ability to edit pictures, render text, and more. OpenAI’s ChatGPT image generator can also create pictures in the style of Japanese animated film company Studio Ghibli—a trend that quickly went viral, too.
The images are fun and easy to make—all you need is a free ChatGPT account and a photo. Yet to create an action figure or Studio Ghibli-style image, you also need to hand over a lot of data to OpenAI, which could be used to train its models.
Hidden Data
The data you are giving away when you use an AI image editor is often hidden. Every time you upload an image to ChatGPT, you’re potentially handing over “an entire bundle of metadata,” says Tom Vazdar, area chair for cybersecurity at Open Institute of Technology. “That includes the EXIF data attached to the image file, such as the time the photo was taken and the GPS coordinates of where it was shot.”
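One of those simple protective steps is to strip that metadata before sharing a photo. The sketch below is a minimal illustration using the Pillow imaging library; the file names are placeholders, not from the article.

```python
# Minimal sketch: inspect, then strip, EXIF metadata (timestamps,
# GPS coordinates, device details) before uploading a photo.
# Requires the Pillow library: pip install Pillow
from PIL import Image

def strip_exif(src: str, dst: str) -> None:
    """Re-save only the pixel data, discarding EXIF metadata."""
    with Image.open(src) as img:
        clean = Image.new(img.mode, img.size)
        clean.putdata(list(img.getdata()))
        clean.save(dst)

# See how many EXIF tags the original would hand over.
with Image.open("photo.jpg") as img:
    print("EXIF tags found:", len(img.getexif()))

strip_exif("photo.jpg", "photo_clean.jpg")  # upload the clean copy
```

Of course, this only removes the file's metadata; the platform can still collect other signals.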
OpenAI also collects data about the device you’re using to access the platform. That means your device type, operating system, browser version, and unique identifiers, says Vazdar. “And because platforms like ChatGPT operate conversationally, there’s also behavioral data, such as what you typed, what kind of images you asked for, how you interacted with the interface and the frequency of those actions.”
It’s not just your face. If you upload a high-resolution photo, you’re giving OpenAI whatever else is in the image, too—the background, other people, things in your room and anything readable such as documents or badges, says Camden Woollven, group head of AI product marketing at risk management firm GRC International Group.
This type of voluntarily provided, consent-backed data is “a gold mine for training generative models,” especially multimodal ones that rely on visual inputs, says Vazdar.
OpenAI denies it is orchestrating viral photo trends as a ploy to collect user data, yet the firm certainly gains an advantage from it. OpenAI doesn’t need to scrape the web for your face if you’re happily uploading it yourself, Vazdar points out. “This trend, whether by design or a convenient opportunity, is providing the company with massive volumes of fresh, high-quality facial data from diverse age groups, ethnicities, and geographies.”
OpenAI says it does not actively seek out personal information to train models—and it doesn’t use public data on the internet to build profiles about people to advertise to them or sell their data, an OpenAI spokesperson tells WIRED. However, under OpenAI’s current privacy policy, images submitted through ChatGPT can be retained and used to improve its models.
Any data, prompts, or requests you share helps teach the algorithm—and personalized information helps fine tune it further, says Jake Moore, global cybersecurity adviser at security outfit ESET, who created his own action figure to demonstrate the privacy risks of the trend on LinkedIn.
Uncanny Likeness
In some markets, your photos are protected by regulation. In the UK and EU, data-protection regulations, including the GDPR, offer strong protections, such as the right to access or delete your data. At the same time, use of biometric data requires explicit consent.
However, photographs become biometric data only when processed through a specific technical means allowing the unique identification of a specific individual, says Melissa Hall, senior associate at law firm MFMac. Processing an image to create a cartoon version of the subject in the original photograph is “unlikely to meet this definition,” she says.
Meanwhile, in the US, privacy protections vary. “California and Illinois are leading with stronger data protection laws, but there is no standard position across all US states,” says Annalisa Checchi, a partner at IP law firm Ionic Legal. And OpenAI’s privacy policy doesn’t contain an explicit carve-out for likeness or biometric data, which “creates a grey area for stylized facial uploads,” Checchi says.
The risks include your image or likeness being retained, potentially used to train future models, or combined with other data for profiling, says Checchi. “While these platforms often prioritize safety, the long-term use of your likeness is still poorly understood—and hard to retract once uploaded.”
OpenAI says its users’ privacy and security is a top priority. The firm wants its AI models to learn about the world, not private individuals, and it actively minimizes the collection of personal information, an OpenAI spokesperson tells WIRED.
Meanwhile, users have control over how their data is used, with self-service tools to access, export, or delete personal information. You can also opt out of having content used to improve models, according to OpenAI.
ChatGPT Free, Plus, and Pro users can control whether they contribute to future model improvements in their data controls settings. OpenAI does not train on ChatGPT Team, Enterprise, and Edu customer data by default, according to the company.