Source:

  • The Yuan, published on October 25, 2024

By Zorina Alliata

Artificial intelligence is a classic example of a mismatch between perceptions and reality, as people tend to overlook its positive aspects and fear it far more than is warranted by its actual capabilities, argues AI strategist and professor Zorina Alliata.

ALEXANDRIA, VIRGINIA – In recent years, artificial intelligence (AI) has grown and developed into something much bigger than most people could have ever expected. Jokes about robots living among humans no longer seem so harmless, and the average person has begun to develop a new awareness of AI and all its uses. Unfortunately, however – as is often the human tendency – people have become hyper-fixated on the negative aspects of AI, often forgetting about all the good it can do. One should therefore take a step back and remember that humanity is still only in the very early stages of developing real intelligence outside of the human brain; at this point, AI is almost like a small child that humans are raising.

AI is still developing, growing, and adapting, and like any new tech it has its drawbacks. At one point, people had fears and doubts about electricity, calculators, and mobile phones – but now these have become ubiquitous aspects of everyday life, and it is not difficult to imagine a future in which this is the case for AI as well.

The development of AI certainly comes with relevant and real concerns that must be addressed – such as its controversial role in education, the potential job losses it might lead to, and its bias and inaccuracies. For every fear, however, there is also a ray of hope, and that is largely thanks to people and their ingenuity.

Looking at education, many educators around the world are worried about recent developments in AI. The frequently discussed ChatGPT – which is now on its fourth version – is a major red flag for many, raising concerns about plagiarism and fears that it will spell the end of writing as people know it. This is one of the main factors behind the pessimistic reporting about AI that one so often sees in the media.

However, when one actually considers ChatGPT in its current state, it is safe to say that these fears are probably overblown. Can ChatGPT really replace the human mind, which is capable of so much that AI cannot replicate? As for educators, instead of assuming that all their students will want to cheat, they should consider how to take advantage of new tech to enhance the learning experience. Most people now know the tell-tale signs of something that ChatGPT has written: excessive use of numbered lists, repetitive language, and weak comparisons are just three ways to tell whether a piece of writing is legitimate or a bot is behind it. This author personally encourages the use of AI in her own classes, because it is better for students to understand what AI can do and how to use it as a tool in their learning than to avoid and fear it, or to be discouraged from using it no matter the circumstances.

Educators should therefore reframe the idea of ChatGPT in their minds, have open discussions with students about its uses, and help them understand that it is actually just another tool to help them learn more efficiently – and not a replacement for their own thoughts and words. Such frank discussions help students develop their critical thinking skills and start understanding their own influence on ChatGPT and other AI-powered tools.

By developing one’s understanding of AI’s actual capabilities, one can begin to understand its uses in everyday life. Some would have people believe that this means countless jobs will inevitably become obsolete, but that is not entirely true. Even if AI does replace some jobs, it will still need industry experts to guide it, meaning that entirely new jobs are being created at the same time as some older jobs are disappearing.

Adapting to AI is a new challenge for most industries, and it is certainly daunting at times. The reality, however, is that AI is not here to steal people’s jobs. If anything, it will change the nature of some jobs and may even improve them by making human workers more efficient and productive. If AI is to be a truly useful tool, it will still need humans. One should remember that humans working alongside AI and using it as a tool is key, because in most cases AI cannot do the job of a person by itself.

Is AI biased?

Why should one view AI as a tool and not a replacement? The main reason is that AI itself is still learning, and AI-powered tools such as ChatGPT do not understand bias. Whenever ChatGPT is asked a question, it will pull information from anywhere it can, and so it can easily repeat old biases. AI learns from previous data, much of which is biased or out of date. Data about home ownership and mortgages, for example, are often biased because most non-white people in the United States could not get a mortgage until the late 1960s. The effect of this lending discrimination on the data is only now being fully understood.

AI is certainly biased at times, but that stems from human bias. Again, this just reinforces the need for humans to be in control of AI. AI is like a young child in that it is still absorbing what is happening around it. People must therefore not fear it, but instead guide it in the right direction.

For AI to be used as a tool, it must be treated as such. If one wanted to build a house, one would not expect one’s tools to be able to do the job alone – and AI must be viewed through a similar lens. By acknowledging this aspect of AI and taking control of humans’ role in its development, the world would be better placed to reap the benefits and quash the fears associated with AI. One should therefore not assume that all the doom and gloom one reads about AI is exactly as it seems. Instead, people should try experimenting with it and learning from it, and maybe soon they will realize that it was the best thing that could have happened to humanity.

Related posts

Wired: Think Twice Before Creating That ChatGPT Action Figure
OPIT - Open Institute of Technology
May 12, 2025 6 min read

Source:

  • Wired, published on May 1, 2025

People are using ChatGPT’s new image generator to take part in viral social media trends. But using it also puts your privacy at risk—unless you take a few simple steps to protect yourself.

By Kate O’Flaherty

At the start of April, an influx of action figures started appearing on social media sites including LinkedIn and X. Each figure depicted the person who had created it with uncanny accuracy, complete with personalized accessories such as reusable coffee cups, yoga mats, and headphones.

All this is possible because of OpenAI’s new GPT-4o-powered image generator, which supercharges ChatGPT’s ability to edit pictures, render text, and more. OpenAI’s ChatGPT image generator can also create pictures in the style of Japanese animated film company Studio Ghibli—a trend that quickly went viral, too.

The images are fun and easy to make—all you need is a free ChatGPT account and a photo. Yet to create an action figure or Studio Ghibli-style image, you also need to hand over a lot of data to OpenAI, which could be used to train its models.

Hidden Data

The data you are giving away when you use an AI image editor is often hidden. Every time you upload an image to ChatGPT, you’re potentially handing over “an entire bundle of metadata,” says Tom Vazdar, area chair for cybersecurity at Open Institute of Technology. “That includes the EXIF data attached to the image file, such as the time the photo was taken and the GPS coordinates of where it was shot.”

OpenAI also collects data about the device you’re using to access the platform. That means your device type, operating system, browser version, and unique identifiers, says Vazdar. “And because platforms like ChatGPT operate conversationally, there’s also behavioral data, such as what you typed, what kind of images you asked for, how you interacted with the interface and the frequency of those actions.”

It’s not just your face. If you upload a high-resolution photo, you’re giving OpenAI whatever else is in the image, too—the background, other people, things in your room and anything readable such as documents or badges, says Camden Woollven, group head of AI product marketing at risk management firm GRC International Group.

This type of voluntarily provided, consent-backed data is “a gold mine for training generative models,” especially multimodal ones that rely on visual inputs, says Vazdar.
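
To make the metadata point concrete, here is a minimal sketch in Python, assuming the Pillow imaging library, of how one might inspect the EXIF tags embedded in a photo and then re-save a copy containing only the pixel data, so the hidden bundle never leaves your machine. The file names are hypothetical, and this is an illustration, not any vendor’s actual process.

```python
# A minimal sketch, assuming the Pillow library (pip install pillow).
# File names are hypothetical.
from PIL import Image
from PIL.ExifTags import TAGS

def show_exif(path: str) -> None:
    """Print the EXIF tags embedded in an image, e.g. timestamps and camera info."""
    exif = Image.open(path).getexif()
    for tag_id, value in exif.items():
        # Map numeric tag IDs to human-readable names where known
        print(f"{TAGS.get(tag_id, tag_id)}: {value}")

def strip_metadata(src: str, dst: str) -> None:
    """Re-save only the pixel data, dropping EXIF and other metadata."""
    img = Image.open(src)
    clean = Image.new(img.mode, img.size)
    clean.putdata(list(img.getdata()))  # copy pixels only, no metadata
    clean.save(dst)

show_exif("portrait.jpg")  # see what you would be handing over
strip_metadata("portrait.jpg", "portrait_clean.jpg")
```

Stripping the metadata removes only the hidden bundle, of course; anything visible in the frame itself, which is Woollven’s point about backgrounds, documents, and badges, still travels with the image.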

OpenAI denies it is orchestrating viral photo trends as a ploy to collect user data, yet the firm certainly gains an advantage from it. OpenAI doesn’t need to scrape the web for your face if you’re happily uploading it yourself, Vazdar points out. “This trend, whether by design or a convenient opportunity, is providing the company with massive volumes of fresh, high-quality facial data from diverse age groups, ethnicities, and geographies.”

OpenAI says it does not actively seek out personal information to train models—and it doesn’t use public data on the internet to build profiles about people to advertise to them or sell their data, an OpenAI spokesperson tells WIRED. However, under OpenAI’s current privacy policy, images submitted through ChatGPT can be retained and used to improve its models.

Any data, prompts, or requests you share help teach the algorithm, and personalized information helps fine-tune it further, says Jake Moore, global cybersecurity adviser at security outfit ESET, who created his own action figure to demonstrate the privacy risks of the trend on LinkedIn.

Uncanny Likeness

In some markets, your photos are protected by regulation. In the UK and EU, data-protection regulations, including the GDPR, offer strong protections, including the right to access or delete your data. At the same time, the use of biometric data requires explicit consent.

However, photographs become biometric data only when processed through a specific technical means allowing the unique identification of a specific individual, says Melissa Hall, senior associate at law firm MFMac. Processing an image to create a cartoon version of the subject in the original photograph is “unlikely to meet this definition,” she says.

Meanwhile, in the US, privacy protections vary. “California and Illinois are leading with stronger data protection laws, but there is no standard position across all US states,” says Annalisa Checchi, a partner at IP law firm Ionic Legal. And OpenAI’s privacy policy doesn’t contain an explicit carve-out for likeness or biometric data, which “creates a grey area for stylized facial uploads,” Checchi says.

The risks include your image or likeness being retained, potentially used to train future models, or combined with other data for profiling, says Checchi. “While these platforms often prioritize safety, the long-term use of your likeness is still poorly understood—and hard to retract once uploaded.”

OpenAI says its users’ privacy and security are a top priority. The firm wants its AI models to learn about the world, not private individuals, and it actively minimizes the collection of personal information, an OpenAI spokesperson tells WIRED.

Meanwhile, users have control over how their data is used, with self-service tools to access, export, or delete personal information. You can also opt out of having content used to improve models, according to OpenAI.

ChatGPT Free, Plus, and Pro users can control whether they contribute to future model improvements in their data controls settings. OpenAI does not train on ChatGPT Team, Enterprise, and Edu customer data by default, according to the company.

LADBible and Yahoo News: Viral AI trend could present huge privacy concerns, says expert
OPIT - Open Institute of Technology
May 12, 2025 4 min read

Source:

  • LADBible and Yahoo News

You’ve probably seen them all over Instagram

By James Moorhouse

Experts have warned against participating in a viral social media trend which sees people use ChatGPT to create an action figure version of themselves.

If you’ve spent any time whatsoever doomscrolling on Instagram, TikTok, or (dare I say it) LinkedIn recently, you’ll be all too aware of the viral trend.

Obviously, there’s nothing more entertaining and frivolous than seeing AI-generated versions of your co-workers and their cute little laptops and piña coladas, but it turns out that it might not be the best idea to take part.

There may well be some benefits to artificial intelligence, but it can often produce some pretty disturbing results. Earlier this year, a lad from Norway sued OpenAI after ChatGPT falsely claimed he had been convicted of killing two of his kids.

Unfortunately, if you don’t like AI, you’re going to have to accept that it’s going to become a regular part of our lives. You only need to look at WhatsApp or Facebook Messenger to realise that. But it’s always worth saying please and thank you to ChatGPT just in case society does collapse and the AI robots take over, in the hope that they treat you mercifully. Although it might cost them a little more electricity.

Anyway, in case you’re thinking of getting involved in this latest AI trend and sharing your face and your favourite hobbies with a high-tech robot, maybe don’t. You don’t want to end up starring in your own Netflix series, à la Black Mirror.

Tom Vazdar, area chair for cybersecurity at Open Institute of Technology, spoke with Wired about some of the dangers of sharing personal details about yourself with AI.

Every time you upload an image to ChatGPT, you’re potentially handing over ‘an entire bundle of metadata’, he revealed.

Vazdar added: “That includes the EXIF data attached to the image file, such as the time the photo was taken and the GPS coordinates of where it was shot.

“Because platforms like ChatGPT operate conversationally, there’s also behavioural data, such as what you typed, what kind of images you asked for, how you interacted with the interface and the frequency of those actions.”

Essentially, if you upload a photo of your face, you’re not just giving AI access to your face, but also to whatever is in the background, such as your location or other people who might feature.

Vazdar concluded: “This trend, whether by design or a convenient opportunity, is providing the company with massive volumes of fresh, high-quality facial data from diverse age groups, ethnicities, and geographies.”

While we’re at it, maybe stop using ChatGPT for your university essays and for basic questions you can find the answer to on Google as well. The last thing you need is AI knowing you don’t know how to do something basic if it does take over the world.
