Source:


By Francesco Derchi

Purpose is a strategic tool for driving innovation, building competitive advantage, and addressing the challenges of AI, writes Francesco Derchi.

Since the early 2000s, technology has dominated discussions among scholars and professionals about global development and economic trends. The first wave of research on the internet’s impact on firms and society focused on the enabling potential of new technologies. The concept of the “digital revolution,” popularized by Nicholas Negroponte, became the new paradigm for thinking about the development of the firm’s macro environment and how businesses could leverage it as an asset for creating competitive advantage. The following wave focused on the convergence of different technologies, as in advanced manufacturing, and on the dynamics of coexistence between humans and machines. On the management side, the major challenge has been defining effective digital transformation practices that help organizations migrate to, and exploit, this new paradigm.

The current technological focus builds on these previous trends, particularly on artificial intelligence and, more recently, the emergence of generative AI. The Age of AI is characterized by technology’s power to reshape business and society on a variety of levels. While AI’s pervasive impact is not new for firms, the mainstream adoption of ChatGPT for business purposes, and the response to this ready adoption from big tech players like Microsoft, Google, and more recently Apple, show how AI is reshaping companies’ strategic priorities.

From a research perspective, AI’s societal impact is inspiring new studies in the field of ethics. Luciano Floridi, now of Yale University, has identified several challenges for AI, ranging from issues of global magnitude, such as its environmental impact, to questions of security, intellectual property, privacy, transparency, and accountability. In his work, Floridi underlines the importance of philosophy in defining problems and designing solutions – but it is equally important to consider how these challenges can be addressed at the firm level. What are the tools for managers?

Part of the answer may lie in the recent and growing focus of management studies on “corporate purpose” and “brand purpose.” This trend represents an important attempt to deepen our understanding of “why to act” (purpose framing) and “how to act” (purpose formalizing and internalizing), while technology management studies address the “what to act” (purpose impacting) question. Furthermore, studies show that corporate purpose is critical both for digital-native firms and for traditional companies undergoing a digital transformation, serving as an important growth engine through purpose-driven innovation. It is therefore fair to ask: can purpose help address any of the AI challenges mentioned above?

Purpose concepts are not exclusively “cause-related” like CSR and environmental impact. Other types have emerged, such as “competence” (the function of the product) and “culture” (the intent that drives the business). This broadens the consideration of impact types that can help address specific challenges in the age of AI.

Purpose-driven organizations are not new. Take Tesla’s stated purpose “to accelerate the world’s transition to sustainable energy” – it explicitly addresses environmental challenges while defining a business direction that requires constant innovation and leverages multiple converging technologies. The key is to have the purpose formalized and internalized within the company as a concrete driver of growth.

The Massive Transformative Purpose (MTP) plays a key role in digital transformation. This necessarily ambitious, long-term vision or goal requires firms, particularly those focused on exponential growth, to address emerging accelerating technologies with a purpose-first transformation logic. P&G’s Global Business Services division was able to strengthen its market leadership and gain a competitive advantage over various start-ups and potential disruptors through its “Free up the employee, for free” MTP. It served as a north star for every employee, encouraging them to contribute ideas and best practices to overcome bulky processes and limitations.

My research on MTPs in AI-era firms explores their role in driving innovation to address specific challenges. Results show that the MTP impacts the organization across four dimensions, requiring commitment and synergy from management. Let’s consider these four dimensions by looking at Airbnb:

  1. Internal Impact: The MTP acts as the organization’s genetic code and guiding philosophy. It is key to employee motivation, with a strong relationship between purpose, organizational culture, and firm values. Airbnb’s culture of belonging highlights this through various purpose-shaping practices, starting with the culture-fit interviews conducted during recruitment.
  2. Brand and Market Influence: The MTP contributes directly to building a strong brand and influencing the market. It allows firms to extend beyond functional and symbolic benefits to make the company’s impact on society visible. This involves addressing market demand coherently and consistently. Airbnb’s “Bélo” symbol visually represents this concept of belonging, while its MTP features in campaigns like “Wall and Chain: A Story of Breaking Down Walls.”
  3. Competitive Advantage and Growth: The MTP drives innovation and can lead to superior stock market performance. In digital firms, it is key to creating ecosystems that aggregate leveraged assets and third parties for value creation. Airbnb’s “belong anywhere” transformation journey is a strategic initiative that formalized and internalized the MTP through various touchpoints for all the different ecosystem members. As Leigh Gallagher details in her 2016 Fortune feature about the company, “When travellers leave their homes, they feel alone. They reach their Airbnb, and they feel accepted and taken care of by their host. They then feel safe to be the same kind of person they are when they’re at home.”
  4. Core Organizational Identity: The MTP is part of the core dimension of the organization. More than a goal or a business strategy, it is a strategic issue that generates a sense of direction affecting every part of the organization: internal, external, personality, and expression. This dimension also involves the role of the founder(s) and their personality in shaping the business. At Airbnb, the MTP is often used as a shorthand for the firm’s mission and vision. The founders’ approach is pragmatic: rather than debating differences, time is spent on execution. At the same time, the personalities of the three founders – Chesky, Gebbia, and Blecharczyk – are the identity of the firm. They were the platform’s first hosts, and their credibility is key to making Airbnb a trustworthy and coherent proposition in a crowded market.

Executives and business leaders in the current AI era should embrace three key principles:

  1. Be true: Purpose is an essential strategic tool that enables firms to identify and connect with their original selves, decoding their reason for being and embedding it into their identity.
  2. Be ambitious: The MTP allows for global impact, confronting major challenges by synthesizing business values and guiding innovation paths to address AI-related issues.
  3. Be generous: Purpose allows firms to explicitly address environmental and social issues, taking action on values-based challenges such as transparency, respect for intellectual property, and accountability.

By following these principles, organizations and their leaders can maintain their direction and continue to advance in the AI era.


Related posts

Wired: Think Twice Before Creating That ChatGPT Action Figure
OPIT - Open Institute of Technology
May 12, 2025

Source:

  • Wired, published on May 1, 2025

People are using ChatGPT’s new image generator to take part in viral social media trends. But using it also puts your privacy at risk—unless you take a few simple steps to protect yourself.

By Kate O’Flaherty

At the start of April, an influx of action figures started appearing on social media sites including LinkedIn and X. Each figure depicted the person who had created it with uncanny accuracy, complete with personalized accessories such as reusable coffee cups, yoga mats, and headphones.

All this is possible because of OpenAI’s new GPT-4o-powered image generator, which supercharges ChatGPT’s ability to edit pictures, render text, and more. OpenAI’s ChatGPT image generator can also create pictures in the style of Japanese animated film company Studio Ghibli—a trend that quickly went viral, too.

The images are fun and easy to make—all you need is a free ChatGPT account and a photo. Yet to create an action figure or Studio Ghibli-style image, you also need to hand over a lot of data to OpenAI, which could be used to train its models.

Hidden Data

The data you are giving away when you use an AI image editor is often hidden. Every time you upload an image to ChatGPT, you’re potentially handing over “an entire bundle of metadata,” says Tom Vazdar, area chair for cybersecurity at Open Institute of Technology. “That includes the EXIF data attached to the image file, such as the time the photo was taken and the GPS coordinates of where it was shot.”

OpenAI also collects data about the device you’re using to access the platform. That means your device type, operating system, browser version, and unique identifiers, says Vazdar. “And because platforms like ChatGPT operate conversationally, there’s also behavioral data, such as what you typed, what kind of images you asked for, how you interacted with the interface and the frequency of those actions.”
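
If you want to see what Vazdar is describing before you upload anything, you can inspect and strip that metadata yourself. Below is a minimal sketch, not from the original article, written in Python and assuming the Pillow imaging library (pip install pillow); the file names are placeholders. It counts the EXIF tags in a photo and saves a copy rebuilt from the raw pixels, so the embedded metadata is left behind.

    # Minimal sketch: report EXIF tags and save a metadata-free copy of a photo.
    # Assumes the Pillow library; file names below are hypothetical placeholders.
    from PIL import Image

    def strip_exif(src_path: str, dst_path: str) -> None:
        with Image.open(src_path) as img:
            exif = img.getexif()
            if exif:
                print(f"{src_path}: {len(exif)} EXIF tags found (e.g. timestamps, GPS).")
            # Rebuild the image from raw pixel data so no metadata is carried over.
            clean = Image.new(img.mode, img.size)
            clean.putdata(list(img.getdata()))
            clean.save(dst_path)

    strip_exif("selfie.jpg", "selfie_no_exif.jpg")

Note that this only removes metadata embedded in the file; as the next paragraph makes clear, the visible content of the photo itself is still handed over.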

It’s not just your face. If you upload a high-resolution photo, you’re giving OpenAI whatever else is in the image, too—the background, other people, things in your room and anything readable such as documents or badges, says Camden Woollven, group head of AI product marketing at risk management firm GRC International Group.

This type of voluntarily provided, consent-backed data is “a gold mine for training generative models,” especially multimodal ones that rely on visual inputs, says Vazdar.

OpenAI denies it is orchestrating viral photo trends as a ploy to collect user data, yet the firm certainly gains an advantage from it. OpenAI doesn’t need to scrape the web for your face if you’re happily uploading it yourself, Vazdar points out. “This trend, whether by design or a convenient opportunity, is providing the company with massive volumes of fresh, high-quality facial data from diverse age groups, ethnicities, and geographies.”

OpenAI says it does not actively seek out personal information to train models—and it doesn’t use public data on the internet to build profiles about people to advertise to them or sell their data, an OpenAI spokesperson tells WIRED. However, under OpenAI’s current privacy policy, images submitted through ChatGPT can be retained and used to improve its models.

Any data, prompts, or requests you share help teach the algorithm—and personalized information helps fine-tune it further, says Jake Moore, global cybersecurity adviser at security outfit ESET, who created his own action figure to demonstrate the privacy risks of the trend on LinkedIn.

Uncanny Likeness

In some markets, your photos are protected by regulation. In the UK and EU, data-protection regulations, including the GDPR, offer strong protections, including the right to access or delete your data. At the same time, the use of biometric data requires explicit consent.

However, photographs become biometric data only when processed through a specific technical means allowing the unique identification of a specific individual, says Melissa Hall, senior associate at law firm MFMac. Processing an image to create a cartoon version of the subject in the original photograph is “unlikely to meet this definition,” she says.

Meanwhile, in the US, privacy protections vary. “California and Illinois are leading with stronger data protection laws, but there is no standard position across all US states,” says Annalisa Checchi, a partner at IP law firm Ionic Legal. And OpenAI’s privacy policy doesn’t contain an explicit carve-out for likeness or biometric data, which “creates a grey area for stylized facial uploads,” Checchi says.

The risks include your image or likeness being retained, potentially used to train future models, or combined with other data for profiling, says Checchi. “While these platforms often prioritize safety, the long-term use of your likeness is still poorly understood—and hard to retract once uploaded.”

OpenAI says its users’ privacy and security are a top priority. The firm wants its AI models to learn about the world, not private individuals, and it actively minimizes the collection of personal information, an OpenAI spokesperson tells WIRED.

Meanwhile, users have control over how their data is used, with self-service tools to access, export, or delete personal information. You can also opt out of having content used to improve models, according to OpenAI.

ChatGPT Free, Plus, and Pro users can control whether they contribute to future model improvements in their data controls settings. OpenAI does not train on ChatGPT Team, Enterprise, and Edu customer data by default, according to the company.

LADBible and Yahoo News: Viral AI trend could present huge privacy concerns, says expert
OPIT - Open Institute of Technology
May 12, 2025

Source:

  • LADBible and Yahoo News

You’ve probably seen them all over Instagram

By James Moorhouse

Experts have warned against participating in a viral social media trend which sees people use ChatGPT to create an action figure version of themselves.

If you’ve spent any time whatsoever doomscrolling on Instagram, TikTok, or (dare I say it) LinkedIn recently, you’ll be all too aware of the viral trend.

Obviously, there’s nothing more entertaining and frivolous than seeing AI-generated versions of your co-workers and their cute little laptops and piña coladas, but it turns out that it might not be the best idea to take part.

There may well be some benefits to artificial intelligence, but often it can produce some pretty disturbing results. Earlier this year, a lad from Norway sued ChatGPT after it falsely claimed he had been convicted of killing two of his kids.

Unfortunately, if you don’t like AI, then you’re going to have to accept that it’s going to become a regular part of our lives. You only need to look at WhatsApp or Facebook Messenger to realise that. But it’s always worth saying please and thank you to ChatGPT just in case society does collapse and the AI robots take over, in the hope that they treat you mercifully. Although it might cost them a little more electricity.

Anyway, in case you’re thinking of getting involved in this latest AI trend and sharing your face and your favourite hobbies with a high tech robot, maybe don’t. You don’t want to end up starring in your own Netflix series, à la Black Mirror.

Tom Vazdar, area chair for cybersecurity at Open Institute of Technology, spoke with Wired about some of the dangers of sharing personal details about yourself with AI.

Every time you upload an image to ChatGPT, you’re potentially handing over “an entire bundle of metadata,” he revealed.

Vazdar added: “That includes the EXIF data attached to the image file, such as the time the photo was taken and the GPS coordinates of where it was shot.

“Because platforms like ChatGPT operate conversationally, there’s also behavioural data, such as what you typed, what kind of images you asked for, how you interacted with the interface and the frequency of those actions.”

Essentially, if you upload a photo of your face, you’re not just giving AI access to your face, but also to whatever is in the background, such as your location or other people who might feature.

Vazdar concluded: “This trend, whether by design or a convenient opportunity, is providing the company with massive volumes of fresh, high-quality facial data from diverse age groups, ethnicities, and geographies.”

While we’re at it, maybe stop using ChatGPT for your university essays and the general basic questions you could find the answer to on Google as well. The last thing you need is AI knowing you don’t know how to do something basic if it does take over the world.
