By Francesco Derchi, Chair in Digital Business of OPIT – Open Institute of Technology
Today, organizations face increasing pressure to achieve immediate results, often at the expense of a long-term vision. Ongoing economic uncertainty, fueled by stress, anxiety, global conflicts, and technological change, pushes many companies to focus only on short-term profits. However, a concept that could radically change how we think about business is emerging with force: corporate purpose. Fueling this debate are a growing number of professionals and experts who see in the definition of a purpose not only an ethical value but a real strategic lever for improving corporate competitiveness.
Public debate about a company’s ultimate purpose increased fivefold between 1995 and 2016. Recent research finds that corporate purpose is no longer just a statement of intent, but a guiding principle that can shape operations, define corporate culture, and even positively impact bottom lines.
Historically, companies have always had a clear “reason for being,” often recognized and granted by governments. Their existence was not limited to immediate profit, but responded to a social mandate: to generate value for the community. This concept, which has its roots in the industrial revolution and even in the Roman Empire, has been lost over time, replaced by the logic of maximizing short-term profit.
Some experts, however, believe that the time has come to return to a model that places purpose at the center of corporate strategy, without sacrificing profitability. This does not mean denying the need for economic results, but reorienting the company towards a broader objective, which integrates ethics and sustainability. The challenge, therefore, is to find a virtuous model that allows companies to reconcile profits and social responsibility.
Several studies converge on similar findings: companies that act consistently with a clear purpose outperform the market by 42%. By contrast, merely defining a purpose, without truly integrating it into daily practice, does not lead to significant results. Finally, companies without a purpose underperform the market by 42%.
In terms of valuation, over a 12-year period, companies driven by a clear purpose saw a 175% increase in brand value, compared with an average growth rate of 86% (source: BCG BrightHouse). This trend is also reflected in consumer behavior: 88% of consumers prefer to buy products from companies driven by a clear purpose rather than from companies without a clear direction.
By Lokesh Vij, Professor of Cloud Computing Infrastructure, Cloud Development, Cloud Computing Automation and Ops and Cloud Data Stacks at OPIT – Open Institute of Technology
You’ve probably seen two of the most recent popular social media trends. The first is creating and posting your personalized action figure version of yourself, complete with personalized accessories, from a yoga mat to your favorite musical instrument. There is also the Studio Ghibli trend, which creates an image of you in the style of a character from one of the animation studio’s popular films.
Both of these are possible thanks to OpenAI’s GPT-4o-powered image generator. But what are you risking when you upload a picture to generate this kind of content? More than you might imagine, according to Tom Vazdar, chair of cybersecurity at the Open Institute of Technology (OPIT), in a recent interview with Wired. Let’s take a closer look at the risks and how they tie into the question of responsible artificial intelligence.
To get a personalized image of yourself back from ChatGPT, you need to upload an actual photo, or potentially multiple images, and tell ChatGPT what you want. But in addition to using your image to generate content for you, OpenAI could also be using your willingly submitted image to help train its AI model. Vazdar, who is also CEO and AI & Cybersecurity Strategist at Riskoria and a board member for the Croatian AI Association, says that this kind of content is “a gold mine for training generative models,” but you have limited power over how that image is integrated into their training strategy.
Plus, you are uploading much more than just an image of yourself. Vazdar reminds us that we are handing over “an entire bundle of metadata.” This includes the EXIF data attached to the image, such as exactly when and where the photo was taken. And your photo may have more content in it than you imagine, with the background – including people, landmarks, and objects – also able to be tied to that time and place.
In addition to this, OpenAI also collects data about the device that you are using to engage with the platform, and, according to Vazdar, “There’s also behavioral data, such as what you typed, what kind of image you asked for, how you interacted with the interface and the frequency of those actions.”
After all that, OpenAI knows a lot about you, and soon, so could its AI model, because it is studying you.
OpenAI claims that they did not orchestrate these social media trends simply to get training data for their AI, and that’s almost certainly true. But they also aren’t denying that access to that freely uploaded data is a bonus. As Vazdar points out, “This trend, whether by design or a convenient opportunity, is providing the company with massive volumes of fresh, high-quality facial data from diverse age groups, ethnicities, and geographies.”
OpenAI isn’t the only company using your data to train its AI. Meta recently updated its privacy policy to allow the company to use your personal information on Meta-related services, such as Facebook, Instagram, and WhatsApp, to train its AI. While it is possible to opt out, Meta isn’t advertising that fact or making it easy, which means that most users share their data by default.
You can also control what happens with your data when using ChatGPT. Again, while not well publicized, you can use ChatGPT’s self-service tools to access, export, and delete your personal information, and opt out of having your content used to improve OpenAI’s model. Nevertheless, even if you choose these options, it is still worth stripping data such as location and time from images before uploading them, and considering the privacy of anything visible in them, including people and objects in the background, before sharing.
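Stripping that metadata before uploading is straightforward to automate. As a minimal sketch (using the third-party Pillow library; the function name and file paths are illustrative), copying only the pixel data into a fresh image discards the EXIF block, including GPS coordinates, timestamps, and device details:

```python
from PIL import Image  # third-party Pillow library


def strip_exif(src_path: str, dst_path: str) -> None:
    """Re-save an image without its EXIF metadata (GPS location, timestamp, device model)."""
    with Image.open(src_path) as img:
        # Copy only the raw pixel data into a fresh image object;
        # the new image carries none of the original file's metadata.
        clean = Image.new(img.mode, img.size)
        clean.putdata(list(img.getdata()))
        clean.save(dst_path)
```

Command-line tools such as exiftool achieve the same result; either way, checking what metadata a photo carries before sharing it is a good habit.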
OpenAI and Meta need to provide these kinds of opt-outs due to data protection laws, such as GDPR in the EU and the UK. GDPR gives you the right to access or delete your data, and the use of biometric data requires your explicit consent. However, your photo only becomes biometric data when it is processed using a specific technical measure that allows for the unique identification of an individual.
But just because ChatGPT is not using this technology doesn’t mean it can’t learn a lot about you from your images.
But you might wonder, “Isn’t it a good thing that AI is being trained using a diverse range of photos?” After all, there have been widespread reports in the past of AI struggling to recognize black faces because models have been trained mostly on white faces. Similarly, there have been reports of bias within AI due to the information it receives. Doesn’t sharing from a wide range of users help combat that? Yes, but there is so much more that could be done with that data without your knowledge or consent.
One of the biggest risks is that the data can be manipulated for marketing purposes, not just to get you to buy products, but also potentially to manipulate behavior. Take, for instance, the Cambridge Analytica scandal, in which personal data was used to manipulate voters, or the proliferation of deepfakes spreading false news.
Vazdar believes that AI should be used to promote human freedom and autonomy, not threaten it. It should be something that benefits humanity in the broadest possible sense, and not just those with the power to develop and profit from AI.
OPIT’s Master’s in Responsible AI combines technical expertise with a focus on the ethical implications of AI, diving into questions such as this one. Focusing on real-world applications, the course considers sustainable AI, environmental impact, ethical considerations, and social responsibility.
Completed over three or four 13-week terms, it starts with a foundation in technical artificial intelligence and then moves on to advanced AI applications. Students finish with a Capstone project, which sees them apply what they have learned to real-world problems.