
Quality of faculty: that’s what compelled Matthew Belcher, an independent freelance web developer and student working towards a Bachelor’s in Computer Science, to enroll in the Open Institute of Technology (OPIT), allowing him to work with some of the top professionals in his chosen industry.
Matthew was recently elected program representative for the course, helping to guide the interests of over 100 students across the globe in discussions with the area chair and director to ensure that the course is fit for purpose and meets student needs.
Why OPIT?
In a recent interview, Matthew told us that the reason he chose OPIT to continue his education was the expertise of the faculty. He shared that he conducted intensive research into a variety of programs that would help him shape his interest and skill in technology into potential career opportunities.
Matthew felt that the OPIT faculty stood out as not just great teachers, but top-tier professionals with real-world experience in the kinds of industries he wants to work in. Their professional roles mean they offer an up-to-the-minute understanding of the changing market.
The Computer Science Program
The BSc in Computer Science is a fully accredited course delivered completely online over six 13-week terms.
The program delivers foundational skills, both theoretical and practical, in all aspects of computer science, including programming, software development, databases, cloud computing, cybersecurity, data science, and artificial intelligence. Students deliver a dissertation or project in the final term, and in term five they can either select five electives from a pool of 27 or specialize in one of five fields:
- Data Science and Artificial Intelligence
- Cloud Computing
- Cybersecurity
- Metaverse and Gaming
- Full Stack Development
The BSc in Computer Science is aimed at students interested in software development and engineering, data science, web development, app and game development, IT business analysis, cybersecurity, and database architecture.
Meet the Faculty
Now let’s meet some of the incredible faculty who influenced Matthew’s decision to select OPIT.
Art Sedighi
As well as teaching at OPIT, Art Sedighi is an adjunct professor at Johns Hopkins University and a professor at Purdue University Global. He was previously a partner at CDI Global, a senior cybersecurity solution architect at Amazon AWS, and head of high-performance computing and grid engineering at Bank of America.
Sedighi has 20 years of experience planning, designing, developing, and taking end-to-end ownership of cloud solutions. He has managed cross-functional teams and been responsible for driving adoption strategies across enterprises.
Sedighi teaches software engineering and cloud adoption in the BSc in Computer Science and is also a professor in the BSc in Digital Business program.
Lokesh Vij
As well as teaching in OPIT’s Computer Science Program, Lokesh Vij is a database architect in Canada for the semiconductor firm Broadcom Inc. and a part-time professor at Seneca Polytechnic in Toronto.
Vij describes himself as a cloud and data evangelist, educator, and mentor. With over 20 years of rich experience, he has designed, developed, and delivered enterprise-scale data solutions across disparate source systems, data formats, and relational and non-relational databases.
His proven expertise includes enhancing business profitability by leveraging insights derived from diverse data sets and constructing decision support systems that convert transactional data into analytical formats. Combined with data visualization techniques, this allows him to articulate and clarify business insights effectively.
In the BSc in Computer Science program, he teaches cloud computing infrastructure, cloud development, cloud computing automation and ops, and cloud data stacks. He is also a professor in the BSc in Digital Business and the MSc in Applied Data Science & AI, where he also covers big data and cloud computing.
Tom Vazdar
Tom Vazdar is OPIT’s area chair for cybersecurity and also the CEO of Riskoria, which focuses on cybersecurity strategies. His areas of expertise include strategy services, AI and cybersecurity, AI infrastructure development, and compliance assurance and risk management.
Vazdar teaches computer security in both the BSc in Computer Science and the BSc in Digital Business programs, as well as for the MSc in Enterprise Cybersecurity.
Sylvester Kaczmarek
Sylvester Kaczmarek is a former Chief Science Officer at WeSpace Technologies and a former AI Mentor and Researcher at NASA. He is now an independent executive leading secure, safe, and resilient AI-driven space innovations, a science communicator, and an advisor to deep tech investors, governments, and startups.
In the past, he has specialized in the integration of artificial intelligence, robotics, cybersecurity, and edge computing in aerospace applications.
In OPIT’s BSc in Computer Science program, Kaczmarek teaches cryptography and secure communications, secure software development, and parallel and distributed computing. He also teaches in the MSc in Enterprise Cybersecurity program.
Lorenzo Marvardi
As well as teaching at OPIT, Lorenzo Marvardi is a managing director at Accenture. With a background in cybersecurity management and security consulting, he has 15 years of experience as a security professional and executive, implementing security programs across different industries and geographies.
Marvardi teaches cybersecurity in both the BSc in Computer Science and the BSc in Digital Business programs.
Khaled Elbehiery
Khaled Elbehiery is a senior director and network engineer at Charter Communications. As well as teaching for OPIT, he is a part-time professor at Park University and DeVry University, both in the United States.
He describes himself as a professor, scientist, inventor, and author. He has publications on cloud engineering, quantum computing, space, robotics, bionics, microgravity, and modern educational methodologies.
Elbehiery teaches cloud and IoT security and computer networks on both the BSc in Computer Science and the MSc in Enterprise Cybersecurity programs.
Francesco Derchi
Francesco Derchi is a brand, innovation, and digital expert with over 14 years of experience. He is chair of digital business at OPIT, an advisory board member for Arsene Lippens, on the faculty at EHL in Switzerland and the Università degli Studi di Genova, and a course director and contributor for Harvard Business Review Italia.
Derchi teaches business strategy and digital marketing on both the BSc in Computer Science and the BSc in Digital Business, as well as on the MSc in Digital Business and Innovation.

In April 2025, the Open Institute of Technology (OPIT) invited Professor Andrea Gozzi, Head of Strategy at Partnership for the Digital Industries Ecosystem at Siemens, Italy, to talk about how new technologies are transforming industry.
Industry Is Driving Technological Innovation
According to Gozzi, who teaches in the OPIT BSc in Digital Business and MSc in Digital Business and Innovation programs, many of the young people he meets imagine that the development of technology like artificial intelligence (AI) and the Internet of Things (IoT) is being led by digital companies like Amazon and Apple. But, he adds, they haven’t really considered how industry is utilizing and leading in these fields.
Industry includes markets such as energy, aviation, and manufacturing. Gozzi explains how new technologies are transforming these industries and, in turn, how these industries are pioneering new technologies. As a result, industry represents a growing job market, especially for OPIT graduates.
Challenges to Industry
Gozzi started his discussion by explaining the challenges facing industry today and how those challenges necessitate innovation. He identified three principal challenges:
- Geopolitical instability: events like the war in Ukraine and changing tariffs are undermining supply lines, complicating contracts, and making sales unpredictable.
- Labor shortages: changing demographics and culture are shrinking the skilled workforce. Gozzi explained that in the past, people worked for one company for their entire lives, learning specialist skills and adding value throughout their careers. Today, people tend to change jobs several times, so to maximize their value, they need to be brought up to speed more quickly, and companies need strategies to retain knowledge despite labor churn.
- Sustainability pressures: these are on the rise, both in terms of meeting green regulations and ensuring that expensive factories provide a long-term return on investment.
To adapt to these challenges, industry is looking to develop innovative technological solutions.
Opportunities: AI, Digital Twins, Industrial IoT, and Robotics
Having set the scene, Gozzi dove into the types of technologies that industry is pioneering to adapt to these challenges, and how it is pushing them beyond their mainstream potential.
Artificial Intelligence (AI)
Gozzi explained how AI promises to be an industry game changer, especially in the quest to go from automated to adaptive manufacturing. Automated manufacturing can create products based on existing programming, recreating the same product perfectly every time while eliminating human error.
But the goal is to create machines that don’t just make one thing but can make anything, adapting as designs change. Adaptive manufacturing is a core goal of Industry 4.0, the next industrial revolution, characterized by automated, interconnected factories. Gozzi shared a compelling example from Rolls-Royce’s airplane engine production.
AI powers machines to handle more complex tasks independently, which requires them to be networked and connected. As a result, factories can collect data from every area and use it to optimize supply chains and resource allocation. This enables predictive maintenance, reducing downtime by around 50%, and smarter scheduling, reducing yield losses by around 40%. Since factories represent major investments, maximizing productivity is essential to achieving a strong return on investment (ROI).
This same data is increasingly being used to iterate and test prototypes in the digital twin environment.
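The predictive-maintenance idea is easy to illustrate. A minimal sketch, assuming a simple control-chart rule over hypothetical sensor readings (the function name and thresholds are illustrative, not any vendor’s API):

```python
from statistics import mean, stdev

def needs_maintenance(readings, baseline, window=5, sigma=3.0):
    """Flag a machine for service when the mean of its last `window`
    sensor readings drifts more than `sigma` standard deviations
    away from a known-good baseline period."""
    mu, sd = mean(baseline), stdev(baseline)
    recent = mean(readings[-window:])
    return abs(recent - mu) > sigma * sd
```

Real industrial systems use far richer models (vibration spectra, learned anomaly detectors), but the principle is the same: compare live telemetry against an expected envelope and schedule service before failure.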
Digital Twins
According to Gozzi, today’s factories are built with intelligent, interconnected machines that are adaptable and extendable and can provide strategic insights. The entire manufacturing system is connected by silent threads of information that can turn siloed data into a much larger picture.
These threads form the neural network of digital twins: virtual replicas of physical systems that can be used to monitor and optimize operations, simulate actions before execution, and accelerate innovation. New production techniques and designs can be tested virtually before being tested in the real world, allowing processes to be optimized for quality and efficiency. Here, Gozzi shared a compelling example from Siemens’ accelerator ecosystem, which creates replicas of factories in the metaverse.
IIoT and Robotics
In the factories themselves, manufacturers rely on advanced robotics connected by the Industrial Internet of Things (IIoT) to carry out tasks independently. IIoT ensures that a centralized intelligence integrates every element of production in real time. This not only allows robots to do their jobs but also ensures that issues can be identified and solutions executed through the digital twin, which can also be used to teach robots new tasks that are tested and piloted virtually before deployment.
Meanwhile, robots must be mobile, AI-driven, and integrated into the factory’s ecosystem. AI has enabled robots to become aware of their environment, unlocking new behaviors such as picking pieces from a non-uniform set and placing them where they are required. This may seem small, but it is something robots had never been able to do before and represents one of the biggest recent breakthroughs in manufacturing.
AI is also enabling robots to safely work alongside human workers in manufacturing settings. These collaborative robots, called “cobots,” are another recent development that promises to be transformative.
Cybersecurity
Gozzi explained that all this requires advanced connectivity and a closed network with a “zero trust” architecture to mitigate security threats. Hackers, viruses, and other network threats don’t just represent the potential loss of valuable and confidential data, but also a major safety risk if cobots are interfered with and begin to act unsafely.
Powerful and secure closed industrial networks are being enabled by 5G, but they still need to be continuously monitored by cybersecurity teams in real time.
The Future of Industry
Industry is modernizing quickly, and technology is playing an increasingly important role. This is manifesting in the development of smart adaptive factories using a mix of AI, advanced robotics, and digital twins. Industrial IoT, strong connected networks, and advanced cybersecurity support this. The demand for talent that understands these technologies and their applications will only increase in the coming years.
Moreover, Gozzi reassured students that although technology will replace real people in some positions, humans will still be needed to collaborate and work side by side with robots. This will require people to learn new skills, in particular a data-driven mindset and approach to problem-solving.
Both OPIT’s BSc in Digital Business and MSc in Digital Business and Innovation are designed to prepare students for these kinds of careers.

Source:
- Agenda Digitale, published on May 16th, 2025
By Riccardo Ocleppo, Founder and Director of OPIT – Open Institute of Technology
AI ethics requires ongoing commitment. Organizations must integrate guidelines and a corporate culture geared towards responsibility and inclusiveness, preventing negative consequences for individuals and society.
In the world of artificial intelligence, concerns about algorithmic bias are coming to the forefront, calling for a collective effort to promote ethical practices in the development and use of AI.
This implies the need to understand the multiple causes and potential consequences of the biases themselves, identify concrete solutions and recognize the key role of academic institutions in this process.
Bias in AI is a form of injustice, often systemic, that can be embedded in algorithms. Its origins are many, but the main culprit is almost always the data set used to train the models. If this data reflects inequalities or prejudices present in society, the risk is that AI will absorb and reproduce them, consolidating these distortions.
But bias can also manifest in the opposite direction. This is what happened some time ago with Google Gemini: in an attempt to ensure greater inclusivity, Google’s generative AI system ended up generating content and images completely disconnected from the reality it was supposed to represent.
Further complicating the picture is the very nature of AI models, which are often characterized by complex algorithms and opaque decision-making processes. This complexity makes it difficult to identify, and therefore correct, biases inherent in the systems.
Ethical Data Management to Reduce Bias in AI
Adopting good data management practices is essential to address these issues. The first step is to ensure that the datasets used for training are diverse and representative. This means actively seeking data that includes a wide variety of demographic, cultural, and social contexts, so as to avoid AI exclusively reproducing existing and potentially biased models.
Alongside data diversification, it is equally important to test models on different demographic groups. Only in this way can latent biases that would otherwise remain invisible be highlighted. Furthermore, promoting transparency in algorithms and decision-making processes is crucial. Transparency allows for critical control and makes all actors involved in the design and use of AI accountable.
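Testing models on different demographic groups can start as simply as disaggregating an accuracy metric. A minimal sketch in Python (the function name and data are illustrative):

```python
from collections import defaultdict

def group_accuracy(y_true, y_pred, groups):
    """Accuracy per demographic group; a large gap between groups
    is a signal of potential bias worth investigating."""
    correct, total = defaultdict(int), defaultdict(int)
    for t, p, g in zip(y_true, y_pred, groups):
        total[g] += 1
        correct[g] += int(t == p)
    return {g: correct[g] / total[g] for g in total}
```

For example, a model that scores 1.0 on one group and 0.5 on another warrants scrutiny even if its overall accuracy looks acceptable.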
Strategies for Ethical and Responsible Artificial Intelligence
Building ethical AI is not an isolated action, but an ongoing journey that requires constant attention and updating. This commitment is divided into several fundamental steps. First, ethical guidelines must be defined. Organizations must clearly establish the ethical standards to follow in the development and use of AI, inspired by fundamental values such as fairness, responsibility and transparency. These principles serve as a compass to guide all projects.
It is also essential to include a plurality of perspectives in the development of AI. Multidisciplinary teams, composed of technologists, ethicists, sociologists, and representatives of the potentially affected communities, can help prevent and correct biases thanks to the variety of approaches. Last but not least, organizations should promote an ethical culture: in addition to establishing rules and composing diverse teams, it is essential to cultivate a corporate culture that places ethics at the center of every project. Only by integrating these values into the DNA of the organization can we ensure that ethics is a founding element of AI development.
The Consequences of Biased Artificial Intelligence
Ignoring the problem of bias can have serious and unpredictable consequences, with profound impacts on different areas of our lives. From the reinforcement of social inequalities to the loss of trust in AI-based systems, the risk is to fuel skepticism and resistance towards technological innovation. AI, if distorted, can negatively influence crucial decisions in sectors such as healthcare, employment and justice. Think, for example, of loan selection algorithms that unfairly penalize certain categories, or facial recognition software that incorrectly identifies people, with possible legal consequences. These are just some of the situations in which an unethical use of AI can worsen existing inequalities.
University Training and Research to Counter Bias in AI
Universities and higher education institutions have a crucial responsibility to address bias and promote ethical practices in AI development. Ethics must certainly be integrated into educational curricula. By including ethics modules in AI and computer science courses, universities can provide new generations of developers with the tools to recognize and address bias, contributing to more equitable and inclusive design. Universities can also be protagonists through research.
Academic institutions, with their autonomy and expertise, can explore the complexities of bias in depth, developing innovative solutions for detecting and mitigating bias. Since the topic of bias is multidimensional in nature, a collaborative approach is needed, thus fostering interdisciplinary collaboration. Universities can create spaces where computer scientists, ethicists, lawyers, and social scientists work together, offering more comprehensive and innovative solutions.
But that’s not all. As places of critical thinking and debate, universities can foster dialogue between developers, policy makers, and citizens through events, workshops, and conferences. This engagement is essential to raise awareness and promote responsible use of AI.
In this direction, several universities have already activated degree courses in artificial intelligence that combine advanced technical skills (in areas such as machine learning, computer vision and natural language processing) with training that is attentive to ethical and human implications.
Academic Opportunities for an Equitable AI Future
More and more universities around the world – including Yale and Oxford – are also creating research departments dedicated to AI and ethics.
The path to ethical AI is complex, but it also represents an opportunity to build a future where technology truly serves the common good.
By recognizing the root causes of bias, adopting responsible data practices, and engaging in ongoing and vigilant development, we can reduce the unintended effects of biased algorithms. In this process, academic institutions – thanks to their expertise and authority – are at the forefront, helping to shape a more equitable and inclusive digital age.

Source:
- TechFinancials, published on May 16th, 2025
By Zorina Alliata
Artificial intelligence (AI) used to be the stuff of science fiction. Stories about rogue machines and robot uprisings were once a source of amusement, not anxiety. But over recent years, AI has quietly embedded itself in our daily lives, from the algorithms behind social media feeds to the voice assistants managing our calendars. This quiet takeover has become something far louder: fear.
Headlines around AI are often alarmist. Statements such as “AI will take your job”, “AI will end education”, or “AI is dangerous and unregulated” are thrown around regularly. These narratives feed on uncertainty and fuel distrust.
But it doesn’t have to be this way. The hyper-fixation on the never-ending negative aspects of AI is the wrong approach to take. What if AI isn’t the villain? What if, at this stage, it’s simply a child?
AI, in many ways, is still learning. It mimics human behaviour, absorbs language, and forms patterns based on what it sees. Its current capabilities, however powerful they may seem, are not equivalent to human intelligence. It has limitations. It makes mistakes. It can even be manipulated and misled. It reflects our world, flaws and all. In that sense, AI is less an omnipotent force and more a toddler trying to find its way.
And, like any child, it needs guidance.
This is especially evident in education. The emergence of AI tools such as ChatGPT has caused a stir in higher education institutions and universities, sparking fears about plagiarism and the erosion of critical thinking. Some institutions have responded with strict bans, while others have embraced cautious integration. The panic is understandable, but is it misplaced?
Rather than jumping to conclusions, educators should consider shifting the conversation. AI can, in fact, become an ally in learning. Instead of assuming students will cheat, we can teach them to use AI responsibly. Most of us educators can already recognise the signs of AI-generated work: excessive use of numbered lists, repetitive language and poor comparison skills. So why not use this as a teaching opportunity?
Encouraging students to engage with AI critically, understanding what it’s good at, where it falls short, and how to improve its output, can strengthen their own analytical skills. It invites them to become more active participants in their learning, not passive consumers of machine generated answers. Teaching young people how to work with AI is arguably more important than shielding them from it.
Outside the classroom, AI’s impact on the workforce is another growing concern. Stories about AI replacing jobs often dominate the news cycle. But these conversations often ignore a key point: AI is not autonomous. AI needs human designers, engineers, analysts, and ethicists to guide it. For every job that AI may eliminate, others will emerge to support and direct it.
More importantly, there are many things AI simply cannot do. It doesn’t understand nuance, morality or emotion. It can’t make ethical decisions without human input. These aren’t minor gaps, they’re fundamental. That’s why we must stop imagining AI as an unstoppable force and start thinking about how to raise it responsibly.
When considering how to raise our AI child responsibly, we need to acknowledge the issue of the algorithm being biased. Critics often point out that AI reproduces prejudices and errors, and whilst this is true, the source of that bias is us. It is important to remember that AI learns from historical data created by us, much of which reflects deeply ingrained societal inequalities.
Take, for example, mortgage lending in the US, where decades of discriminatory practices have skewed the data. Unless we intervene, AI trained on this information will inevitably reflect those same biases.
That’s not a reason to reject AI. It’s a reason to be more involved in its development, like any good parent. The responsibility lies with us.
Parenting is not about control for control’s sake; it’s about nurturing growth while setting boundaries. AI, like a child, needs feedback, accountability, and care. It will grow, but how it grows is up to us.
It’s tempting to view technology as something that happens to us, rather than something we can shape. But AI doesn’t exist outside of society, it’s a product of our values, decisions, and input. If we treat it as a monster, it may become one. If we treat it as a mirror, it will reflect what we show it. And if we treat it as a child, we may be able to raise it into something better.
So instead of fearmongering, let’s ask ourselves a better question: Are we raising AI correctly?
- Zorina Alliata, Professor of Responsible AI at OPIT – Open Institute of Technology

Source:
- Wired, published on May 01st, 2025
People are using ChatGPT’s new image generator to take part in viral social media trends. But using it also puts your privacy at risk—unless you take a few simple steps to protect yourself.
By Kate O’Flaherty
At the start of April, an influx of action figures started appearing on social media sites including LinkedIn and X. Each figure depicted the person who had created it with uncanny accuracy, complete with personalized accessories such as reusable coffee cups, yoga mats, and headphones.
All this is possible because of OpenAI’s new GPT-4o-powered image generator, which supercharges ChatGPT’s ability to edit pictures, render text, and more. OpenAI’s ChatGPT image generator can also create pictures in the style of Japanese animated film company Studio Ghibli—a trend that quickly went viral, too.
The images are fun and easy to make—all you need is a free ChatGPT account and a photo. Yet to create an action figure or Studio Ghibli-style image, you also need to hand over a lot of data to OpenAI, which could be used to train its models.
Hidden Data
The data you are giving away when you use an AI image editor is often hidden. Every time you upload an image to ChatGPT, you’re potentially handing over “an entire bundle of metadata,” says Tom Vazdar, area chair for cybersecurity at Open Institute of Technology. “That includes the EXIF data attached to the image file, such as the time the photo was taken and the GPS coordinates of where it was shot.”
OpenAI also collects data about the device you’re using to access the platform. That means your device type, operating system, browser version, and unique identifiers, says Vazdar. “And because platforms like ChatGPT operate conversationally, there’s also behavioral data, such as what you typed, what kind of images you asked for, how you interacted with the interface and the frequency of those actions.”
It’s not just your face. If you upload a high-resolution photo, you’re giving OpenAI whatever else is in the image, too—the background, other people, things in your room and anything readable such as documents or badges, says Camden Woollven, group head of AI product marketing at risk management firm GRC International Group.
This type of voluntarily provided, consent-backed data is “a gold mine for training generative models,” especially multimodal ones that rely on visual inputs, says Vazdar.
OpenAI denies it is orchestrating viral photo trends as a ploy to collect user data, yet the firm certainly gains an advantage from it. OpenAI doesn’t need to scrape the web for your face if you’re happily uploading it yourself, Vazdar points out. “This trend, whether by design or a convenient opportunity, is providing the company with massive volumes of fresh, high-quality facial data from diverse age groups, ethnicities, and geographies.”
OpenAI says it does not actively seek out personal information to train models—and it doesn’t use public data on the internet to build profiles about people to advertise to them or sell their data, an OpenAI spokesperson tells WIRED. However, under OpenAI’s current privacy policy, images submitted through ChatGPT can be retained and used to improve its models.
Any data, prompts, or requests you share helps teach the algorithm—and personalized information helps fine tune it further, says Jake Moore, global cybersecurity adviser at security outfit ESET, who created his own action figure to demonstrate the privacy risks of the trend on LinkedIn.
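One of the “simple steps” available before uploading is checking whether a photo carries EXIF metadata at all. A minimal sketch, using only the Python standard library to scan a JPEG’s segment markers (an illustrative check, not a complete parser):

```python
import struct

def has_exif(jpeg_bytes: bytes) -> bool:
    """Return True if a JPEG byte stream contains an EXIF (APP1) segment."""
    if jpeg_bytes[:2] != b"\xff\xd8":  # SOI marker: not a JPEG at all
        return False
    i = 2
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:
            break
        marker = jpeg_bytes[i + 1]
        if marker == 0xDA:  # start of scan: image data, no more metadata
            break
        (length,) = struct.unpack(">H", jpeg_bytes[i + 2:i + 4])
        if marker == 0xE1 and jpeg_bytes[i + 4:i + 10] == b"Exif\x00\x00":
            return True  # APP1 segment with the EXIF header
        i += 2 + length  # skip the marker plus the segment payload
    return False
```

Re-encoding an image before sharing it, for example by taking a screenshot or using an editor’s export function, typically strips this segment, which is one practical way to avoid leaking timestamps and GPS coordinates.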
Uncanny Likeness
In some markets, your photos are protected by regulation. In the UK and EU, data-protection regulation, including the GDPR, offers strong protections, including the right to access or delete your data. At the same time, the use of biometric data requires explicit consent.
However, photographs become biometric data only when processed through a specific technical means allowing the unique identification of a specific individual, says Melissa Hall, senior associate at law firm MFMac. Processing an image to create a cartoon version of the subject in the original photograph is “unlikely to meet this definition,” she says.
Meanwhile, in the US, privacy protections vary. “California and Illinois are leading with stronger data protection laws, but there is no standard position across all US states,” says Annalisa Checchi, a partner at IP law firm Ionic Legal. And OpenAI’s privacy policy doesn’t contain an explicit carve-out for likeness or biometric data, which “creates a grey area for stylized facial uploads,” Checchi says.
The risks include your image or likeness being retained, potentially used to train future models, or combined with other data for profiling, says Checchi. “While these platforms often prioritize safety, the long-term use of your likeness is still poorly understood—and hard to retract once uploaded.”
OpenAI says its users’ privacy and security is a top priority. The firm wants its AI models to learn about the world, not private individuals, and it actively minimizes the collection of personal information, an OpenAI spokesperson tells WIRED.
Meanwhile, users have control over how their data is used, with self-service tools to access, export, or delete personal information. You can also opt out of having content used to improve models, according to OpenAI.
ChatGPT Free, Plus, and Pro users can control whether they contribute to future model improvements in their data controls settings. OpenAI does not train on ChatGPT Team, Enterprise, and Edu customer data by default, according to the company.

Source:
- LADBible and Yahoo News, published on May 01st, 2025
You’ve probably seen them all over Instagram
By James Moorhouse
Experts have warned against participating in a viral social media trend which sees people use ChatGPT to create an action figure version of themselves.
If you’ve spent any time whatsoever doomscrolling on Instagram, TikTok, or, dare I say it, LinkedIn recently, you’ll be all too aware of the viral trend.
Obviously, there’s nothing more entertaining and frivolous than seeing AI-generated versions of your co-workers and their cute little laptops and piña coladas, but it turns out that it might not be the best idea to take part.
There may well be some benefits to artificial intelligence but often it can produce some pretty disturbing results. Earlier this year, a lad from Norway sued ChatGPT after it falsely claimed he had been convicted of killing two of his kids.
Unfortunately, if you don’t like AI, then you’re going to have to accept that it’s going to become a regular part of our lives. You only need to look at WhatsApp or Facebook Messenger to realise that. But it’s always worth saying please and thank you to ChatGPT just in case society does collapse and the AI robots take over, in the hope that they treat you mercifully. Although it might cost them a little more electricity.
Anyway, in case you’re thinking of getting involved in this latest AI trend and sharing your face and your favourite hobbies with a high tech robot, maybe don’t. You don’t want to end up starring in your own Netflix series, à la Black Mirror.
Tom Vazdar, area chair for cybersecurity at Open Institute of Technology, spoke with Wired about some of the dangers of sharing personal details about yourself with AI.
Every time you upload an image to ChatGPT, you’re potentially handing over “an entire bundle of metadata,” he revealed.
Vazdar added: “That includes the EXIF data attached to the image file, such as the time the photo was taken and the GPS coordinates of where it was shot.
“Because platforms like ChatGPT operate conversationally, there’s also behavioural data, such as what you typed, what kind of images you asked for, how you interacted with the interface and the frequency of those actions.”
Essentially, if you upload a photo of your face, you’re not just giving AI access to your face, but also to whatever is in the background, such as your location or other people who might feature.
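Vazdar’s point about EXIF data is easy to see in practice. As an illustrative sketch (the function name here is hypothetical, and real EXIF parsing would normally go through a library such as Pillow), GPS coordinates are stored in an image’s EXIF tags as degree/minute/second values plus a hemisphere reference, which convert to a precise decimal location like this:

```python
def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert EXIF-style GPS degrees/minutes/seconds to a decimal coordinate."""
    decimal = degrees + minutes / 60 + seconds / 3600
    # Southern and western hemispheres are negative by convention
    return -decimal if ref in ("S", "W") else decimal

# A photo tagged 45°48'54.0"N 15°58'41.0"E resolves to roughly central Zagreb
lat = dms_to_decimal(45, 48, 54.0, "N")  # 45.815
lon = dms_to_decimal(15, 58, 41.0, "E")
```

In other words, one uploaded holiday snap can carry a street-level fix on where it was taken, which is why stripping metadata before sharing (most phones offer a “remove location” option when exporting a photo) meaningfully reduces what you hand over.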
Vazdar concluded: “This trend, whether by design or a convenient opportunity, is providing the company with massive volumes of fresh, high-quality facial data from diverse age groups, ethnicities, and geographies.”
While we’re at it, maybe stop using ChatGPT for your university essays and the general basic questions you could answer with a Google search as well. The last thing you need is AI knowing you don’t know how to do something basic if it does take over the world.

It’s not uncommon to hear stories from people who have committed several years to obtaining a university degree, only to discover it doesn’t fit the purposes they need when entering the business world.
Why? Even though universities spend years developing their degree courses in areas such as economics, business, and biomedical science, it is challenging to keep up with the latest technological advancements due to the lengthy approval process and a lack of experts on staff.
Today, artificial intelligence (AI), big data, cloud computing, and cybersecurity are beginning to impact every aspect of our business lives, regardless of whether you work in a cutting-edge science lab or an antiquities museum. However, many graduates fail to leverage this new technology and adapt it to their careers.
This is why OPIT – the Open Institute of Technology – was born, to offer affordable and accessible courses that bridge the gap between what is taught in traditional universities and what the job market requires.
How Is the Job Market Changing?
According to the World Economic Forum’s Future of Jobs Report 2025, 92 million jobs will be displaced by new technologies, while 170 million new jobs that utilize new technology will be created.
The report suggests that 39% of the key skills required in the job market will change by 2030. These include hard technical skills and the soft skills needed to work in creative environments where change is a constant.
New job descriptions will look for big data specialists, fintech engineers, and AI and machine learning specialists. Employers will also be seeking creative thinkers who are flexible and agile, as well as resilient in the face of change.
Technology-focused jobs that are in increasing demand include:
- Machine Learning Engineer – Developing and refining algorithms that enable systems to learn from data and improve performance.
- Natural Language Processing Specialist – Developing chatbots that can understand users, communicate naturally, and provide valuable assistance.
- AI Ethicist – Ensuring that AI is developed and deployed with broader social, legal, and moral implications considered.
- Data Architect – Gathering raw data from different sources and designing infrastructure that consolidates this information and makes it usable.
- Chief Data Officer – Leading a company’s data collection and application strategy, ensuring data-driven decision-making.
- Cybersecurity Engineer – Building information security systems and IT architecture, and protecting them from unauthorized access and cyberattacks.
Over the next few years, we can expect most jobs to require an understanding of the applications for cutting-edge technology, if not how to manage the technical backend. Leaders need to know how to implement AI and automation to save time and reduce errors. Researchers need to understand how to leverage data to reveal new findings, and everyone needs to understand how to work in secure digital environments.
The conclusion is that in tomorrow’s job market, workers will need to find the right balance of technical and human skills to thrive.
A New Approach to Learning Is Needed
Learning requires a fundamental change. Just as businesses need to be adaptable, places of higher learning need to be adaptable too, keeping their offerings up to date and shortening the time required to accredit and deliver new courses fit for the current job market.
This aligns with OPIT’s mission to unlock progress and employment on a global scale by providing high-quality and affordable education in the field of technology.
How Does OPIT Work?
OPIT is accredited with the MFHEA (Malta Further and Higher Education Authority) in accordance with the European Qualifications Framework (EQF).
Working with an evolving faculty of experts, OPIT offers a technological education aligned with the current and future career market.
Currently, OPIT offers two Bachelor’s degrees:
- Digital Business – Focuses on merging business acumen with digital fluency, bridging the strategy-execution gap in the evolving digital age.
- Modern Computer Science – Establishes 360-degree foundational skills, both theoretical and practical, in all aspects of today’s computer science, including programming, software development, the cloud, cybersecurity, data science, and AI.
OPIT also offers four Master’s degrees:
- Digital Business & Innovation – Empowers professionals to drive innovation by leveraging digital technologies and AI, covering topics such as strategy, digital marketing, customer value management, and AI applications.
- Responsible Artificial Intelligence – Combines technical expertise with a focus on the ethical implications of modern AI, including sustainability and environmental impact.
- Enterprise Cybersecurity – Integrates technical and managerial expertise, equipping students with the skills to implement security solutions and lead cybersecurity initiatives.
- Applied Data Science & AI – Focuses on the intersection of management and tech, with no computer science prerequisites. It provides foundational applied courses coupled with real-world business problems approached with data science and AI.
Courses offer flexible online learning, with both live online-native classes and recorded catch-up sessions. Every course is hands-on and career-aligned, preparing students for multiple career options while working with top professionals.
Current faculty members include Zorina Alliata, principal AI strategist at Amazon; Sylvester Kaczmarek, AI mentor and researcher at NASA; Andrea Gozzi, head of Strategy and Partnership for the Digital Industries Ecosystem at Siemens; and Raj Dasgupta, AI and machine learning scientist at the U.S. Naval Research Laboratory.
OPIT designs its courses to be accessible and affordable, with a dedicated career services department that offers one-on-one career coaching and advice.
Graduating From OPIT
OPIT recently held its first graduation ceremony for students in 2025. Students described their experience with OPIT as unique, innovative, and inspiring. Share the experience of OPIT’s very first graduates in the video here.
If you are curious to learn more about the OPIT student community, OPIT can connect you with a current student. Just reach out.

The world is rapidly changing. New technologies such as artificial intelligence (AI) are transforming our lives and work, redefining the definition of “essential office skills.”
So what essential skills do today’s workers need to thrive in a business world undergoing a major digital transformation? It’s a question that Alan Lerner, director at Toptal and lecturer at the Open Institute of Technology (OPIT), addressed in his recent online masterclass.
In a broad overview of the new office landscape, Lerner shared the essential skills leaders need, managing artificial intelligence among them, to keep abreast of trends.
Here are eight essential capabilities business leaders in the AI era need, according to Lerner, which he also detailed in OPIT’s recent Master’s in Digital Business and Innovation webinar.
An Adapting Professional Environment
Lerner started his discussion by quoting naturalist Charles Darwin.
“It is not the strongest of the species that survives, nor the most intelligent that survives. It is the one that is the most adaptable to change.”
The quote serves to highlight the level of change that we are currently seeing in the professional world, said Lerner.
According to the World Economic Forum’s The Future of Jobs Report 2025, over the next five years 22% of the labor market will be affected by structural change – including job creation and destruction – and much of that change will be enabled by new technologies such as AI and robotics. They expect the displacement of 92 million existing jobs and the creation of 170 million new jobs by 2030.
While there will be significant growth in frontline jobs, such as delivery drivers, construction workers, and care workers, the fastest-growing roles will be tech-related, including big data specialists, fintech engineers, and AI and machine learning specialists. The greatest decline will be in clerical and secretarial roles. The report also predicts that most workers can expect 39% of their existing skill set to be transformed or outdated within five years.
Lerner also highlighted key findings in the Accenture Life Trends 2025 Report, which explores behaviors and attitudes related to business, technology, and social shifts. The report noted five key trends:
- Cost of Hesitation – People are becoming more wary of the information they receive online.
- The Parent Trap – Parents and governments are increasingly concerned with helping the younger generation shape a safe relationship with digital technology.
- Impatience Economy – People are looking for quick solutions over traditional methods to achieve their health and financial goals.
- The Dignity of Work – Employees desire to feel inspired, to be entrusted with agency, and to achieve a work-life balance.
- Social Rewilding – People seek to disconnect and focus on satisfying activities and meaningful interactions.
These are consumer and employee demands representing opportunities for change in the modern business landscape.
Key Capabilities for the AI Era
Businesses are using a variety of strategies to adapt, though not always strategically. According to McLean & Company’s HR Trends Report 2025, 42% of respondents said they are currently implementing AI solutions, but only 7% have a documented AI implementation strategy.
This approach reflects the newness of the technology, with many still unsure of the best way to leverage AI, but also feeling the pressure to adopt and adapt, experiment, and fail forward.
So, what skills do leaders need to lead in an environment with both transformation and uncertainty? Lerner highlighted eight essential capabilities, independent of technology.
Capability 1: Manage Complexity
Leaders need to be able to solve problems and make decisions under fast-changing conditions. This requires:
- Being able to look at and understand organizations as complex social-technical systems
- Keeping a continuous eye on change and adopting an “outside-in” vision of their organization
- Moving fast and fixing things faster
- Embracing digital literacy and technological capabilities
Capability 2: Leverage Networks
Leaders need to develop networks systematically to achieve organizational goals because it is no longer possible to work within silos. Leaders should:
- Use networks to gain insights into complex problems
- Create networks to enhance influence
- Treat networks as mutually rewarding relationships
- Develop a robust profile that can be adapted for different networks
Capability 3: Think and Act “Global”
Leaders should benchmark using global best practices but adapt them to local challenges and the needs of their organization. This requires:
- Identifying what great companies are achieving and seeking data to understand underlying patterns
- Developing perspectives to craft global strategies that incorporate regional and local tactics
- Learning how to navigate culturally complex and nuanced business solutions
Capability 4: Inspire Engagement
Leaders must foster a culture that creates meaningful connections between employees and organizational values. This means:
- Understanding individual values and needs
- Shaping projects and assignments to meet different values and needs
- Fostering an inclusive work environment with plenty of psychological safety
- Developing meaningful conversations and both providing and receiving feedback
- Sharing advice and asking for help when needed
Capability 5: Communicate Strategically
Leaders should develop crisp, clear messaging adaptable to various audiences and focus on active listening. Achieving this involves:
- Creating their communication style and finding their unique voice
- Developing storytelling skills
- Utilizing a data-centric and fact-based approach to communication
- Continual practice and asking for feedback
Capability 6: Foster Innovation
Leaders should collaborate with experts to build a reliable innovation process and a creative environment where new ideas thrive. Essential steps include:
- Developing or enhancing structures that best support innovation
- Documenting and refreshing innovation systems, processes, and practices
- Encouraging people to discover new ways of working
- Aiming to think outside the box and develop a growth mindset
- Trying to be as “tech-savvy” as possible
Capability 7: Cultivate Learning Agility
Leaders should always seek out and learn new things and not be afraid to ask questions. This involves:
- Adopting a lifelong learning mindset
- Seeking opportunities to discover new approaches and skills
- Enhancing problem-solving skills
- Reviewing both successful and unsuccessful case studies
Capability 8: Develop Personal Adaptability
Leaders should be focused on being effective when facing uncertainty and adapting to change with vigor. Therefore, leaders should:
- Be flexible about their approach to facing challenging situations
- Build resilience by effectively managing stress, time, and energy
- Recognize when past approaches do not work in current situations
- Learn from and capitalize on mistakes
Curiosity and Adaptability
With the eight key capabilities in mind, Lerner suggests that curiosity and adaptability are the key skills that everyone needs to thrive in the current environment.
He also advocates for lifelong learning and teaches several key courses at OPIT which can lead to a Bachelor’s Degree in Digital Business.